The Great 4 Variables and how to use them to tame heavy test instability

The Great 4 Variables

In the world of testing we only need to care about 4 Variables. If we could test all the combinations of values they can take, we could be 100% sure our software works. Unfortunately, they are really big ones:

  • code
  • configuration
  • data
  • environment

The code is where we focus most often: unit tests, integration tests, system tests. We know this stuff.

The configuration is a less obvious variable. Most often the system under test runs with its default configuration, so we miss an important aspect of it. We should definitely create and use configuration tests to learn whether the configuration actually works and how it affects the system.

The data is the real nightmare. Both internal data (the application’s database) and external data (coming from outside) can be spoiled, corrupted or simply unsupported in a practically infinite number of combinations, which turns the risk of the system under test failing into a near certainty.

The environment is a much underestimated variable: there are really strange OS configurations, versions and patches which can magically render our fantastic application unusable.

These four Variables are the real source of all the variety of defects we encounter when assuring quality!

I do not want to go into more detail about the 4 Variables here; instead I can recommend a great book which discusses the topic in depth (and other topics as well): Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.

The actual topic I want to cover in this post is the problem of heavy test instability, and I want to use the Great 4 Variables to solve it.

The problem of heavy tests – pass, fail, error

We all know heavy tests. We often refer to them as system or large tests. They need a running application. They start slowly, they run slowly, and most often, after a long period of time, instead of saying pass or fail they just say ERROR. Welcome to the world of passfailerror 🙂

Everything related to heavy tests is slow: they are slow to create, they take a long time to run, and finally it takes even more time to find the reason for an error or failure. If only we could reduce the error rate to a minimum…

The solution

Let’s imagine we are on a project with many heavy tests which often produce the error result. We need to take small steps to reduce this unwanted status. First we need to know the reason for test errors, and we need to be able to get this information as quickly as possible when looking at the test result.

Let’s use the 4 Variables for that purpose.

We just need to realize that our testing code is an application like any other. It is also affected by the Great 4 Variables. To execute a heavy test we need testing code, testing configuration, test data and a test environment. We will be asserting things in some way, and assertions may contain defects. We may configure the test in a way that doesn’t work or produces error results. Our test data may have been changed by a previous test run and thus produce errors, or even worse: non-credible pass/fail results. Finally, our test environment may be malfunctioning or not working at all (the simplest example is a Docker container which fails to start at the beginning of the tests and thus isn’t running when the test starts).

But how can we apply that knowledge? We just need to test all the Variables BEFORE the test is started, to gain confidence that it will produce a credible pass or fail result and will not produce an error.
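
Before going through the Variables one by one, here is a minimal sketch (JUnit 5) of what such pre-flight checks can look like; TestConfig, TestData and TestEnv are hypothetical helpers standing in for your own checks:

    import static org.junit.jupiter.api.Assumptions.assumeTrue;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class HeavyCheckoutTest {

        @BeforeEach
        void preFlightChecks() {
            // test configuration: is the DSL in the mode this suite expects?
            assumeTrue(TestConfig.isGuiMode(), "test configuration: DSL is not in GUI mode");
            // test data: did the fixture data survive previous runs intact?
            assumeTrue(TestData.isUnchanged(), "test data: fixture data was modified");
            // test environment: is the application under test actually up?
            assumeTrue(TestEnv.isApplicationUp(), "test environment: application is not reachable");
        }

        @Test
        void checkoutWorks() {
            // the heavy test body runs only when all pre-flight checks passed
        }
    }

The nice property of assumptions is that a failed one aborts the test with a clear message instead of failing it, so “the test could not run” never masquerades as “the application is broken”.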

  1. test code – here is a real-life scenario: I was using the Cucumber testing framework as a DSL in a project with a nice Selenium back end. The tests were quite stable and the results were clear. What a surprise – for a few days they were completely wrong. Because of a defect in the Cucumber layer they were reporting PASS no matter what the real result was. The lesson learnt: always create unit tests for the testing code!
  2. test configuration – when the testing code should enter some special state for a specific test suite, we can test whether that state is correct. For example, if we have a DSL with configuration for the application endpoint it uses (REST, custom client, GUI), it is possible to test whether the test code is in GUI mode when a GUI test starts (as in the assumeTrue sketch above).
  3. test data – when there is any chance the test data may be corrupted before testing starts, it is necessary to verify that it hasn’t changed. Again, a real-life scenario: I was working with heavy tests which, in certain circumstances when the application under test was interrupted abnormally, were corrupting the test data. So if the test data is not safe for some reason, it is a good idea to check that it is correct before the test starts (see the checksum check in the sketch after this list).
  4. test environment – the “test environment is down” issue is very well known. Either a Docker container failed to start, or maybe the test environment is shared with other teams who planned a machine restart just when our tests start… But I would say we can not only test whether it is up; we can also do more precise checks, such as whether it has sufficient resources to run the test (memory, processor, disk) or whether the test user has the permissions needed for the test to start (both shown in the sketch after this list).
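
To make the data and environment checks (items 3 and 4) concrete, here is a minimal sketch in plain Java; the checksum value, host, port and paths are placeholders to replace with your own (HexFormat needs Java 17+):

    import java.io.File;
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    final class PreFlightChecks {

        // test data (item 3): detect corruption by comparing a SHA-256 checksum
        // of the fixture data against a known-good value recorded earlier
        static boolean dataUnchanged(Path fixture, String expectedSha256)
                throws IOException, NoSuchAlgorithmException {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(fixture));
            return HexFormat.of().formatHex(digest).equals(expectedSha256);
        }

        // test environment (item 4): is the application, e.g. its Docker
        // container, actually accepting connections?
        static boolean applicationUp(String host, int port) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2_000);
                return true;
            } catch (IOException e) {
                return false;
            }
        }

        // test environment (item 4): enough disk space for the test run?
        static boolean enoughDiskSpace(String path, long requiredBytes) {
            return new File(path).getUsableSpace() >= requiredBytes;
        }

        // test environment (item 4): does the test user have write permission?
        static boolean testUserCanWrite(String path) {
            return new File(path).canWrite();
        }
    }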

Once we have the checks in place, they should be visible as separate items in our pipeline so that we know whether our heavy test started at all and, if not, why it failed to start: was it a code, configuration, data or environment failure?
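
For example, the checks could run as their own pipeline step before the heavy tests, failing fast and naming the broken Variable (a sketch reusing the hypothetical helpers from above):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BooleanSupplier;

    final class PreFlightStep {

        public static void main(String[] args) {
            // one named check per Variable; the test code itself is covered
            // by its own unit tests earlier in the pipeline
            Map<String, BooleanSupplier> checks = new LinkedHashMap<>();
            checks.put("test configuration", TestConfig::isGuiMode);
            checks.put("test data", TestData::isUnchanged);
            checks.put("test environment", TestEnv::isApplicationUp);

            for (Map.Entry<String, BooleanSupplier> check : checks.entrySet()) {
                if (!check.getValue().getAsBoolean()) {
                    // a non-zero exit turns this pipeline item red, and the
                    // message names the Variable that blocked the tests
                    System.err.println("PRE-FLIGHT FAILED: " + check.getKey());
                    System.exit(1);
                }
            }
            System.out.println("Pre-flight OK - heavy tests may start.");
        }
    }

A red pre-flight item then means “the heavy tests never started, and here is why”; a red heavy-test item really means the application under test misbehaved.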

After this we can take improvement actions so that it doesn’t happen again. We don’t lose any time analysing long log files just to realize after 15 minutes that the red status doesn’t mean our application under test has a defect in a critical area, but that the test environment simply ran out of disk space.

Of course, when one of the Variables is not problematic in our pipeline and never causes any problems, we can safely skip it. We only need to monitor the Variables which are causing the instability of the heavy tests.

Conclusion

When creating a pipeline for continuous delivery, or at least continuous integration, we cannot afford to lose time on understanding the results. The message has to be clear for small, medium and large tests, at least regarding whether we actually have a defect or not. We need to know it instantly.

This is hard to achieve, especially for large/heavy tests. In my opinion the best way to solve the problem is to test the Great 4 Variables of our tests themselves, filtering out any failures which are not related to the application under test.

Let’s use the saved time for more important tasks.
