3 stages of pipeline tests

Developing a pipeline beyond a certain level of complexity is not easy. I described a pipeline testing framework in one of my earlier posts. It is a great tool for catching many defects very fast. However, in my opinion it catches only about 50% of them. The rest leak into production. Why does this happen?

In the past I wrote a post about the problem of the 4 great variables. Here that problem hits us in practice: a pipeline testing framework can catch most of the defects related to CODE, but none related to ENVIRONMENT, CONFIGURATION, or DATA.

What can we do? We need to add extra testing stages which will fill in the gaps.

Stage 1 – pipeline testing framework

This stage catches most of the problems related to code: logic errors, syntax errors, and so on. Multiple tests can be written to verify logic paths, variable values, overall syntax correctness, and the expected communication with other jobs or libraries (names, input parameters, etc.).
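To give an idea of what such a test looks like, here is a minimal sketch using JenkinsPipelineUnit, a common framework for this purpose (the Jenkinsfile is assumed to be a scripted pipeline, and the downstream job name deploy-app is invented for the example):

```groovy
import static org.junit.Assert.assertTrue

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class PipelineTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub the 'build' step so no real downstream job is triggered
        helper.registerAllowedMethod('build', [Map], { m -> null })
    }

    @Test
    void triggersDeployJobWithExpectedParameters() {
        // Load and execute the pipeline script under test
        runScript('Jenkinsfile')
        assertJobStatusSuccess()
        // Verify communication with other jobs: job name and parameters
        assertTrue(helper.callStack
            .findAll { it.methodName == 'build' }
            .any { callArgsToString(it).contains('deploy-app') })
    }
}
```

Everything runs offline: the framework records the calls the pipeline makes, so logic paths and parameters can be asserted without touching Jenkins at all.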

Stage 2 – Jenkins validation

In this stage we send the pipeline under test to Jenkins for validation. Two steps are required: first, set up access to the Jenkins CLI; second, run the declarative-linter command against the pipeline code.

More information is available here: https://www.jenkins.io/doc/book/managing/cli/ and here: https://www.jenkins.io/doc/book/pipeline/development/.

The same code which passed Stage 1 should be sent to the Jenkins instance and run through the declarative-linter command. The linter can find code problems which the pipeline testing framework cannot: defects related to Jenkins-specific rules, such as the mandatory sections required by declarative pipelines. The best example for me is the steps section, which must not be missing, yet Stage 1 cannot validate that properly.
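For illustration, here is a minimal declarative pipeline with exactly that defect; the CLI invocation in the comment follows the documentation linked above:

```groovy
// Validate via the Jenkins CLI, as described in the docs linked above:
//   ssh -p <ssh-port> <jenkins-controller> declarative-linter < Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Build') {
            // The mandatory 'steps { ... }' block is missing here. Stage 1
            // tests typically do not enforce the declarative schema, but
            // declarative-linter rejects this file. The fix is simply:
            //   steps { sh './build.sh' }
        }
    }
}
```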

Stage 3 – pipeline draft run

This stage is meant to catch the rest of the defects. Since it has to detect configuration, environment, and data problems, it needs to be as similar to a production run as possible. To achieve that, in my opinion a draft run needs the following to be useful:

  • it should use the same files as production
  • it must not influence external systems in any way
  • it should be fast to run

The first two points can be addressed by creating a draft launcher job which sets the configuration so that no interaction with the outside world takes place (Jira communication, repository communication, and user communication are all switched off). Draft jobs are created in Jenkins which use the same files as the production jobs, but with the configuration modified by the draft launcher.
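A minimal sketch of such a launcher could look like this (the job path and the parameter names, DRAFT_MODE, JIRA_ENABLED and so on, are just examples):

```groovy
// Draft launcher: runs the same pipeline files as production,
// but with every external integration switched off.
pipeline {
    agent any
    stages {
        stage('Launch draft') {
            steps {
                build job: 'draft/main-pipeline', parameters: [
                    booleanParam(name: 'DRAFT_MODE',   value: true),
                    booleanParam(name: 'JIRA_ENABLED', value: false),
                    booleanParam(name: 'PUSH_CHANGES', value: false),
                    booleanParam(name: 'NOTIFY_USERS', value: false)
                ]
            }
        }
    }
}
```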

At this point a pipeline draft run is possible, but it will take the same amount of time as a production one. This is certainly not the desired behaviour, which brings us to the third point.

This point is hard to implement but really important. To make the pipeline run fast, all time-consuming operations need to be replaced with dummy operations. For the pipeline to still run successfully, real data from a previous run has to be used.

I solve this problem in the following way (a sketch follows the list):

  • the pipeline puts build artifacts on a shared disk so that they are accessible to all Jenkins nodes
  • it reuses them in consecutive stages/jobs
  • at the same time, I can reach the build artifact location from a past run and copy it aside to a draft workspace, to be used by the draft pipeline
  • during a draft run, the job which builds the application reconfigures its workspace (a configuration parameter) to the draft workspace just after cloning the repository
  • the building phase can be skipped completely and a few unit tests can be run just to check the reporting in Jenkins (a configuration parameter)
  • etc.
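Put together, the building job could implement these shortcuts roughly like this (again, the parameter names, paths, and shell scripts are invented for the example):

```groovy
// Sketch of the application-building job with draft shortcuts.
pipeline {
    agent any
    parameters {
        booleanParam(name: 'DRAFT_MODE', defaultValue: false,
                     description: 'Skip expensive operations and reuse old artifacts')
        string(name: 'DRAFT_WORKSPACE', defaultValue: '',
               description: 'Location of artifacts copied aside from a past run')
    }
    stages {
        stage('Build') {
            steps {
                checkout scm   // the repository is cloned exactly as in production
                script {
                    if (params.DRAFT_MODE) {
                        // Reconfigure to the draft workspace instead of building
                        echo "Draft run: reusing artifacts from ${params.DRAFT_WORKSPACE}"
                    } else {
                        sh './build.sh'                            // the expensive part
                        sh 'cp -r build/ /shared/artifacts/main/'  // shared disk, visible to all nodes
                    }
                }
            }
        }
        stage('Unit tests') {
            steps {
                script {
                    // In draft mode run only a handful of tests, enough to
                    // check that reporting in Jenkins still works
                    sh(params.DRAFT_MODE ? './run-tests.sh --smoke' : './run-tests.sh')
                }
            }
        }
    }
}
```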

Conclusion

Having all 3 stages in place gives me high confidence that the pipeline works properly. Moreover, it is possible to develop it rapidly without testing in production. I get feedback about the current code, configuration, environment, and data. I can also change the value of any of these variables and retest quickly. Fast feedback is the essence of development. We all need it.
