MCPA MuleSoft Certified Platform Architect Level 1 – Getting Production Ready Part 4

  1. Integration Tests

Let us understand how to design automated, API-centric integration tests. The scenarios should comprise both functional and non-functional tests, which means they should include the NFRs and the performance tests as well. The test scenarios of an API are driven by the API specification, and in particular by the RAML definition of the API.

These scenarios and test cases should be defined on the basis of just the discoverable documentation available from the Anypoint Exchange entry for that particular API. Okay? So whoever is documenting the integration tests should just be able to access Anypoint Exchange, look at the documentation there, and compose the tests purely based on the documentation and the knowledge they gained from that Exchange asset. Why? This has two benefits.

One is that this guards against pointlessly writing test scenarios for an API that are irrelevant. Okay? There might be some internal issues in the API implementation that would never come to light based on the spec of the API, but if someone has knowledge of the API implementation, they would unnecessarily write those test scenarios as well. Instead, if they write based on the documentation, then the concentration will be only on the actual surface of the API and how it works. This is the first benefit. The second benefit is that it also highlights the deficiencies of the documentation you have on Anypoint Exchange. Okay? So if your documentation is poor, the integration testers cannot come up with meaningful test scenarios, and that tells you the API is not well documented.

So this also surfaces the drawbacks there. Either way, it is beneficial if the testers start documenting the scenarios based only on the documentation in Anypoint Exchange. The next thing is that there should not be any single interaction in the API Notebook of an API that is not covered by a test scenario. Okay? This can become a chicken-and-egg scenario.

For example, if the API Notebooks are already created before the testers have started documenting the test scenarios, then upon finishing the test scenarios, you should make sure that any extra or irrelevant interactions documented in the API Notebook are removed. But if the test scenarios are written first, then while you are creating the API Notebook interactions, you have to make sure that your interactions cover exactly what the test scenarios describe.
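The coverage check above amounts to a simple set comparison between the interactions documented in the Notebook and the interactions exercised by test scenarios. A minimal sketch, with made-up interaction names purely for illustration:

```python
# Hypothetical sketch: cross-check API Notebook interactions against
# documented test scenarios. The interaction names are invented; in
# practice you would extract them from the Notebook and your
# test-scenario catalogue.

notebook_interactions = {
    "GET /orders",
    "POST /orders",
    "GET /orders/{id}",      # extra interaction no scenario covers
}

scenario_covered = {
    "GET /orders",
    "POST /orders",
}

# Interactions in the Notebook with no backing test scenario -> remove them
uncovered = notebook_interactions - scenario_covered

# Scenarios with no Notebook interaction -> add an interaction for them
missing = scenario_covered - notebook_interactions

print(sorted(uncovered))  # ['GET /orders/{id}']
print(sorted(missing))    # []
```

Whichever artifact comes first, running this comparison both ways keeps the Notebook and the test scenarios in lock-step.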

We should not be adding any unnecessary extra interactions in the API Notebook. Then, you automatically execute the test cases of each API by invoking an actual API endpoint over HTTPS. Okay? It is not run in an embedded runtime environment or anything like that. It should be executed in an actual environment against an actual API endpoint. So this requires the API client sending these test API invocations to fulfill all requirements imposed by the API endpoint and its policies.

Correct? Because if you are actually hitting the original endpoint, then you have to have the client ID and secret, or the OAuth access token, with you. Unless you provide that, the calls will fail. So all such things should be provided, and the API invocations have to be made from the expected VPC or subnet, wherever IP whitelisting policies and the like are enforced. Okay? The most basic test assertions are those that check adherence to the RAML definition in terms of data types, media types, et cetera. Okay? These are the basic things you have to start with: whether the spec is being honored properly or not.

Whether the data types match or not, whether the media types match or not: this is the basic coverage that has to be in place apart from the actual functional tests, and they must execute in a special, production-like staging environment in order to test the performance as well.
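Putting the last two points together, a minimal sketch of such a spec-driven test might look like this. It assumes a hypothetical Orders API whose RAML declares JSON responses with an `orderId` string and an `amount` number; the URL, header names, and fields are illustrative, not from a real API:

```python
# Sketch of an HTTPS integration test that satisfies the endpoint's
# client ID enforcement policy and then makes the basic RAML-driven
# assertions (status, media type, data types). All names are assumptions.
import json
import urllib.request

def build_request(url: str, client_id: str, client_secret: str) -> urllib.request.Request:
    """Attach the credentials the API policies demand."""
    return urllib.request.Request(url, headers={
        "client_id": client_id,          # header names depend on your policy config
        "client_secret": client_secret,
        "Accept": "application/json",
    })

def assert_matches_spec(status: int, content_type: str, body: str) -> None:
    """The most basic RAML-driven assertions: status, media type, data types."""
    assert status == 200
    assert content_type.startswith("application/json")
    payload = json.loads(body)
    assert isinstance(payload["orderId"], str)
    assert isinstance(payload["amount"], (int, float))

# A real run would do:  resp = urllib.request.urlopen(build_request(...))
# Here the assertions are exercised with a canned response for illustration.
assert_matches_spec(200, "application/json", '{"orderId": "SO-1001", "amount": 250.0}')
print("basic spec assertions passed")
```

The functional assertions for each scenario then build on top of this baseline.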

Okay? Any alerting that needs to be added should also be promoted to that particular environment, so that whether the alerts fire properly, as per the expectations of the business or of support, can also be tested. If you have any special dependencies needed to reach the required performance, they have to be called out, and you must make sure your environment has all of them in place. And then a safe subset of the integration tests can also be run in production, as a deployment verification test would be.

Okay, I know that not all customers or enterprises encourage this method because of the risk, but if your project allows it, you can actually have a subset of the integration tests, not the full suite, execute every time a deployment happens to production, so that it tests at least the basic things like connectivity, authentication, and the other endpoints, just like a small deployment verification test. These points comprise the way you should design your integration tests so that they give confidence in your APIs in the application network. Let us move on to the next lecture and discuss the unit testing scenarios, okay?
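One simple way to carve out that safe subset is to tag each integration test and run only the "smoke" tag after a production deployment. A hedged sketch, with invented test names and tags:

```python
# Illustrative sketch: tagging integration tests and running only the
# safe smoke subset as a post-deployment verification. Names are made up.

def check_connectivity():
    return True   # e.g. a HEAD request to the API base URL

def check_authentication():
    return True   # e.g. a call with valid credentials expecting 200

def check_full_order_lifecycle():
    return True   # creates real data; NOT safe to run in production

TESTS = [
    ("connectivity", {"smoke"}, check_connectivity),
    ("authentication", {"smoke"}, check_authentication),
    ("order-lifecycle", {"regression"}, check_full_order_lifecycle),
]

def run(tag: str):
    """Run only the tests carrying the given tag."""
    return {name: fn() for name, tags, fn in TESTS if tag in tags}

results = run("smoke")
print(results)  # only connectivity and authentication are executed
```

The full suite still runs in staging; production only ever sees the read-only, side-effect-free subset.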

  1. Unit Tests

Let us see about unit testing API implementations. API implementations are typically characterized by numerous and complex interactions, correct? Because you have seen throughout the course, right, that the Experience APIs, such as our Create Sales Order API, invoke a process API, such as our Validate External ID or Create Sales Order process API; the process API will in turn call multiple system APIs, and the system APIs will call multiple backend systems. So you know by this time that this three-layered approach is a very complex yet helpful design architecture, correct?

So unit testing such complex API implementations can be very difficult due to the need to deal with all these dependencies of the API implementation. Okay? So with MUnit, Anypoint Platform provides a dedicated unit testing tool that is specifically designed for unit testing Mule applications, okay? Although you can use many other third-party tools like JUnit and others, MUnit is the best fit for this complexity of APIs, as it is offered by MuleSoft and it actually supports you in writing unit tests correctly for this kind of API application.

For example, you can easily stub out the external dependencies of the Mule application, so that without the process API being ready, you can have the unit tests for your Experience API in place and your piece of work is completed. Okay? MUnit also has its own IDE environment, which is Anypoint Studio. So it is the same Studio you use for both your actual API implementation and your MUnit tests, okay? In fact, in MUnit the flows also look similar to your actual flows, so you will not feel like you are working on a different technology or different code.
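MUnit itself stubs Mule message processors in XML flow definitions, but the underlying idea, stubbing an unready downstream dependency so your layer can be tested in isolation, is the same in any unit test framework. As a language-neutral analogy only (this is not MUnit, and all names are hypothetical), the sketch below shows an Experience API tested without its process API:

```python
# Analogy only: MUnit mocks Mule message processors, but the concept of
# stubbing an unready dependency is framework-independent. Hypothetical names.
from unittest import mock

def call_process_api(order: dict) -> dict:
    """Stands in for the real process API call, which is not ready yet."""
    raise NotImplementedError("process API not deployed")

def experience_api_create_order(order: dict, process_call=call_process_api) -> dict:
    """The Experience API logic we want to unit test in isolation."""
    result = process_call(order)
    return {"status": "CREATED", "orderId": result["orderId"]}

# Stub out the downstream dependency so the test does not need it running.
stub = mock.Mock(return_value={"orderId": "SO-1001"})
response = experience_api_create_order({"item": "widget"}, process_call=stub)

assert response == {"status": "CREATED", "orderId": "SO-1001"}
stub.assert_called_once_with({"item": "widget"})
print("experience API tested without the process API")
```

In MUnit the equivalent is a mock-when processor configured inside the test suite flow.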

And the good thing is that these MUnit tests can be included in your Maven build as part of CI/CD, and there is an MUnit Maven plugin. So these tests can be run during the CI process itself, to make sure that your tests pass properly before you deploy. So this has full integration support for DevOps as well. So this is the way you can write your unit tests: with MUnit, which comes out of the box from MuleSoft and has all the features. Okay, let us move on to the next lecture in the course. Happy learning.

  1. Testing Resilience

As we discussed in the introduction to this particular section, the main foundation of application networks is a web of highly interconnected APIs. That is why resilience testing is really important. What is this resilience testing? To be frank, a decade back there was no testing type called resilience testing. It was never part of the typical testing lifecycle, which starts with unit testing, then integration testing, system testing, and acceptance testing, right? It was never part of that particular chain of tests. Okay? No tester used to follow it specifically as a guideline or practice in every organization. Some organizations were doing it, but it was not a general norm.

But now, with microservices architectures and in the world of the cloud, resilience testing has really become important. And you know who brought this in? Netflix. Netflix started this and made the term resilience testing popular, implementing it to test their microservices and to check the health and stability of their environments. Okay, so what is this? Resilience testing is the practice of disrupting that web and asserting whether the resulting, inevitable degradation of the quality of all relevant services or APIs offered in that particular application network is still within acceptable limits or not.

Meaning, we intentionally test in such a way that we try to break the very SLAs we have committed to on our particular APIs, or the policies that we have enforced on them. And not just for APIs: in general, for any services or any programs that are running, there must be some kind of constraints or rules we apply, right? So what we do is intentionally go and disturb them, and then see what the behavior of the application is.

Is it breaking? Is it still responding? If it is still responding, is it within the acceptance criteria or not? Is the performance acceptable under this situation? All these kinds of things will be analyzed. That is why resilience testing is a really important practice if you are moving towards API-led connectivity and application networks, or any other microservices-based architecture.

Okay, what can we do, or what do we typically do, in this kind of testing? I am just highlighting some of the points. They are not the only ones, but some of the important things you can do in resilience testing. One is that we can develop custom software tools implementing the notions of chaos engineering. Chaos engineering is again a term invented, or at least made popular, by Netflix, where the term chaos is self-explanatory, right?

This particular custom software tool will go and create chaos in your environment, intentional chaos, just to observe the behavior of your applications. Netflix has their own tools, if you want to go and learn about them: they have tools called Chaos Monkey and the wider Simian Army, and these kinds of tools actually go and do this kind of stuff in the Netflix environments, okay?

So you can come up with similar tools, or a subset of them if you do not require their full feature set, and try to bring in the same behavior by disturbing your architecture and your design according to your environment's rules. Okay? So what this custom tool should ideally do is act just like any other API client and hit your API; I am talking from the API perspective.

So it will be just like any other API client calling your APIs, sending requests that exceed, or try to exceed, the rate limiting; trying to exceed the number of logins allowed, if you have any such thing in your integrated identity provider; and trying to pass in wrong or expired tokens, that kind of stuff. That is one aspect of it. The second aspect is that your tools will call the platform APIs and try to do things like remove a particular policy enforcement, or add a particular policy enforcement, on the fly. Or go and try to remove one of the workers from your current number of workers, or add one more, or change the vCores from higher to lower.
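A minimal sketch of the first aspect: a chaos probe that deliberately exceeds a rate limit and asserts that the API degrades as expected (clean 429 rejections rather than crashes or timeouts). The in-memory limiter here is a stand-in for a real API gateway policy; the limit and names are invented for illustration:

```python
# Hedged sketch: probe a rate-limiting policy by exceeding it on purpose.
# FakeRateLimitedApi stands in for an API behind a real gateway policy.

LIMIT_PER_WINDOW = 5

class FakeRateLimitedApi:
    """Stands in for an API protected by a rate-limiting policy."""
    def __init__(self, limit: int):
        self.limit = limit
        self.calls_in_window = 0

    def call(self) -> int:
        self.calls_in_window += 1
        return 200 if self.calls_in_window <= self.limit else 429

def chaos_probe(api: FakeRateLimitedApi, attempts: int) -> list:
    """Fire more requests than the policy allows and record the statuses."""
    return [api.call() for _ in range(attempts)]

statuses = chaos_probe(FakeRateLimitedApi(LIMIT_PER_WINDOW), attempts=8)

# Acceptable degradation: the first 5 succeed, the rest are rejected
# cleanly with 429 instead of errors or timeouts.
assert statuses == [200] * 5 + [429] * 3
print(statuses)
```

The same probe shape applies to login limits and expired tokens: exceed the constraint on purpose, then assert the failure mode is the designed one.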

So these are the kinds of things you can write code for in your tool. While these tests run (when I say tests, I mean the things this tool is doing, like adding or removing policies, or changing the workers and vCores), what you have to do at the same time is go and run your integration test regression suites. Okay? Those tests will test the functionality of your applications, right? The business domain functionality testing will be happening through those tests. With both of these running side by side, you will know what the impacts are by looking at the results of your integration tests.

The regression test suites, correct? In ideal scenarios, those regression tests would all pass if everything is healthy, correct? What you are interested in here is seeing how stable your environment is. So you run these two side by side: while one goes and creates chaos in your environment, the other just tries to test the behavior of your application as usual, okay? This way you will understand where the actual problems are, which behaviors are expected and inevitable, where you can fine-tune more, and where you can do what is needed, if applicable, right?
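The side-by-side arrangement can be sketched with two concurrent loops: one simulating the chaos tool, one running the regression suite, with the results compared afterwards. Both workloads below are simulated stand-ins; in a real setup each would drive actual API traffic and platform API calls:

```python
# Hedged sketch: run the chaos tool and the regression suite concurrently
# and collect results for comparison. All workloads are simulated.
import threading
import time

results = {"chaos_actions": 0, "regression_passed": 0, "regression_failed": 0}
stop = threading.Event()

def chaos_loop():
    """Simulated chaos: would add/remove policies, workers, vCores."""
    while not stop.is_set():
        results["chaos_actions"] += 1
        time.sleep(0.01)

def regression_loop(iterations: int):
    """Simulated regression suite running while chaos is in progress."""
    for _ in range(iterations):
        passed = True  # a real suite would call the APIs and assert here
        results["regression_passed" if passed else "regression_failed"] += 1
        time.sleep(0.01)

chaos = threading.Thread(target=chaos_loop)
chaos.start()
regression_loop(iterations=10)   # regression suite runs during the chaos
stop.set()
chaos.join()

print(results)  # compare pass/fail counts against your acceptable limits
```

The interesting output is not whether every test passes, but whether the failures that do occur stay within the degradation limits you consider acceptable.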

So this is what resilience testing is about. Most of the time you need to invest goes into coming up with the things you want to treat as chaos for your particular environment or application. Because there is no general fixed list that says these are the things you have to do. Maybe there are some common ones, such as deleting a server or adding a server, but there will be more that are specific to your domain, your application, and your environment.

So you have to come up with that list and implement a tool to perform those particular tests by adding or removing things. That is the main investment of time in resilience testing. Once those are in place, they are fixed, and you can keep running them whenever you want to test your stability. Okay, let us move on to the next lecture in the section. Happy learning.
