Category Archives: QA in general

Project knowledge maintenance

I can see the never-ending problem with knowledge in organisations. On one hand people move around, they come and go; on the other hand information increases exponentially, both adding new facts and invalidating older ones. The problem is that the main vehicle for information is the people themselves. There are always attempts to make the information independent of them by storing it in various ways, but in my experience it is always ineffective: e-mails, assorted web pages, documents scattered here and there all make access hard and time-consuming, and the information you eventually get is often out of date and incomplete.

OWL expert system

I was looking for possible solutions, and in my opinion a very promising one could be a system consisting of an OWL knowledge base, a reasoner and some user interface, which together would constitute an expert system: the knowledge is stored in one place, independently of employees, and is accessible to everyone eligible at the same time. OWL is a language for representing knowledge about things and their relations. The knowledge base itself is just a collection of facts, all of which need to be typed in manually. Adding a reasoner, however, uncovers so-called "inferred facts" which are normally produced only in our minds. Adding a user interface is self-explanatory.

Let me give you a very simple example. Imagine a knowledge base which contains facts about 3 Things and their relations. Let's name it the "3 Things knowledge base".

3 Things Knowledge base

There is a very nice tool for maintaining the knowledge base itself: Protege. Its clear interface allows you to build and maintain the knowledge base easily, both in Windows and via a web interface.

Let's use it to create the ontology for this expert system (I use the terms knowledge base and ontology interchangeably).

Firstly, we need to create the minimum set of facts. To do this we create 3 classes:

  • BigThing
  • MediumThing
  • SmallThing

classes

Secondly, we need to name the relations between classes by adding object properties (and their hierarchy):

  • contains
  • containsDirectly
  • contains is a transitive parent of containsDirectly

The implementation of the containment relationship is not straightforward. It should be split into the transitive object property "contains" and its subproperty "containsDirectly", as on the Protege screenshot:

object_properties

The important thing is the transitivity of “contains” (which means that if A contains B and B contains C then A contains C).

More on the types of object properties can be read, for example, HERE.

Thirdly, we need to use the object properties to store the information about the relations between the classes, and to add class instances (individuals) at the same time:

  • BigThing containsDirectly MediumThing
  • instance is bigbox

bigthing

  • MediumThing containsDirectly SmallThing
  • instance is mediumbox

mediumthing

  • SmallThing instance is smallbox

smallthing

Notice that we do not need to put any containment facts on SmallThing itself. You can see the text representation of the knowledge base HERE.
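The ontology above was built by clicking around in Protege, but the same facts can also be created programmatically. Below is a minimal OWL-API sketch of that idea; the namespace, class name and file name are my own assumptions, not taken from the original project:

```java
import java.io.FileOutputStream;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLObjectProperty;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class ThreeThingsOntologyBuilder {

    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        IRI base = IRI.create("http://example.org/threeThings");   // assumed namespace
        OWLOntology ontology = manager.createOntology(base);
        OWLDataFactory f = manager.getOWLDataFactory();

        // the 3 classes
        OWLClass big = f.getOWLClass(IRI.create(base + "#BigThing"));
        OWLClass medium = f.getOWLClass(IRI.create(base + "#MediumThing"));
        OWLClass small = f.getOWLClass(IRI.create(base + "#SmallThing"));

        // "contains" is transitive and "containsDirectly" is its subproperty
        OWLObjectProperty contains = f.getOWLObjectProperty(IRI.create(base + "#contains"));
        OWLObjectProperty containsDirectly = f.getOWLObjectProperty(IRI.create(base + "#containsDirectly"));
        manager.addAxiom(ontology, f.getOWLTransitiveObjectPropertyAxiom(contains));
        manager.addAxiom(ontology, f.getOWLSubObjectPropertyOfAxiom(containsDirectly, contains));

        // BigThing containsDirectly MediumThing, MediumThing containsDirectly SmallThing
        manager.addAxiom(ontology, f.getOWLSubClassOfAxiom(
                big, f.getOWLObjectSomeValuesFrom(containsDirectly, medium)));
        manager.addAxiom(ontology, f.getOWLSubClassOfAxiom(
                medium, f.getOWLObjectSomeValuesFrom(containsDirectly, small)));

        // the individuals bigbox, mediumbox and smallbox
        manager.addAxiom(ontology, f.getOWLClassAssertionAxiom(
                big, f.getOWLNamedIndividual(IRI.create(base + "#bigbox"))));
        manager.addAxiom(ontology, f.getOWLClassAssertionAxiom(
                medium, f.getOWLNamedIndividual(IRI.create(base + "#mediumbox"))));
        manager.addAxiom(ontology, f.getOWLClassAssertionAxiom(
                small, f.getOWLNamedIndividual(IRI.create(base + "#smallbox"))));

        manager.saveOntology(ontology, new FileOutputStream("threeThings.owl"));
    }
}
```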

Reasoner

We need to apply a reasoner to the ontology, both to check its consistency and to retrieve inferred facts. There are many more features and details, but they are out of scope of this article…

I use the HermiT reasoner in this example (HermiT.jar is required to run the program).

We need it, for example, to obtain the inferred fact that BigThing contains SmallThing – there is no such fact stated explicitly in the knowledge base!

API

Also, the OWL API is required for the reasoner to interact with the ontology; OWL-API.jar is needed in this example. When writing the OWL API and reasoner code I borrowed heavily from THIS example (I am using version 3.1.0).

GUI

Now, we need to use a DL query to get the information we are interested in. DL query is the syntax used to retrieve information from an ontology. As the result of a query you may get superclasses, classes and subclasses, as well as individuals. In this example we are interested in the subclasses and individuals in the response. The query looks like this:

contains some SmallThing

In the response we get the classes (which will be subclasses of this query expression) and the individuals of the ontology that meet this condition.
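To show how the pieces fit together, here is a hedged OWL-API 3.x sketch of running this query with HermiT; the file name and namespace match the builder sketch above and are assumptions, not the original code (which is linked below):

```java
import java.io.File;

import org.semanticweb.HermiT.Reasoner;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLClassExpression;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class ContainsSmallThingQuery {

    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("threeThings.owl"));
        OWLDataFactory f = manager.getOWLDataFactory();
        String ns = "http://example.org/threeThings#";   // assumed namespace

        // the class expression behind the DL query "contains some SmallThing"
        OWLClassExpression query = f.getOWLObjectSomeValuesFrom(
                f.getOWLObjectProperty(IRI.create(ns + "contains")),
                f.getOWLClass(IRI.create(ns + "SmallThing")));

        // HermiT checks consistency and computes the inferred hierarchy
        OWLReasoner reasoner = new Reasoner.ReasonerFactory().createReasoner(ontology);
        System.out.println("consistent:  " + reasoner.isConsistent());

        // subclasses of the query expression - BigThing shows up only thanks to inference
        System.out.println("classes:     " + reasoner.getSubClasses(query, false).getFlattened());
        // individuals meeting the condition
        System.out.println("individuals: " + reasoner.getInstances(query, false).getFlattened());
    }
}
```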

As the result of our work we have the following expert system at this point:

owl-api

Which works like this:

You can get the full code of the application from HERE.

Problems

It may look simple to do, but unfortunately it is not easy to create a proper knowledge base even for a simple case like this one.

There are good practices you have to know before you can start creating a knowledge base. For example, to implement a containment hierarchy like the one in this example, one needs to read THIS.

Other than that, reasoners differ from each other and support different features, so it is possible that a given reasoner will not be able to operate on the ontology as in this example.

Last but not least, DL query is not an intuitive way of asking questions in my opinion, and I think it would be problematic to create a good translator from English sentences into DL queries.

 

Application of ontologies in QA world

The problem of knowledge bases is very wide and interesting. When thinking of the QA area, I imagine an OWL knowledge base which contains information about the application under test. It could store information about every aspect of the project, at all its abstraction levels – starting from business information (usage, typical load etc.) down to the classes and the design (detailed descriptions). It could be available both to project team members (developers to catch up with the code quickly, testers to understand how to use it etc.) and to other applications which could benefit from such knowledge, for example automatic testing tools (automatic exploration tests, automatic non-functional tests etc.).

This was just a first touch on the project knowledge maintenance problem, which is certainly very complex to solve, but in my opinion also very universal, and that's why it is really worth continuing to experiment and to look for a solution.

I really mean it was just a touch – just take a look at THIS.

Short about mind maps

Human nature

I think mind maps are still not used to the extent they could be. The idea is very powerful: to use our brain more efficiently. Humans in general memorize visual things, maybe also meaningful sentences, sometimes melodies or sequences of body movement (dance), but definitely not numbers or random strings. In my opinion there is a funny situation where passwords are concerned: most systems require a so-called strong password. It is a contemporary myth though: the stronger the password they require, the higher the probability the user will not memorize it. In that case, the user will write it down or will try to use the same one for many systems. Does it increase security? No, it works the opposite way…

Anyway, I like the idea of mind maps as they are truly designed for humans, which is rare in today's systems.

 

Mind map applications

Mind maps are a visual representation of information: colourful nodes connected with colourful lines.

People in general use them for things like:
– brainstorming,
– making quick notes,
– learning things (I tried learning this way and for me it really works)

I do not want to write about these points – they are fairly obvious and there is plenty of information on the internet covering them. I would like to show you other, less obvious ways of using mind maps.

There are a few applications available which implement mind maps (I know Freeplane and FreeMind). I personally prefer Freeplane and all the examples here are done with that application.

Knowledge base

When joining a project, very often the project know-how is dispersed. Actually, I have never encountered a solid knowledge base, not to mention an expert system on top of it. Information1 is in an e-mail, Information2 is on that web page and Information3 is known only to that guy over there. This is the reality in which we often have to start working.
It is handy to start using a mind map as a personal knowledge base, like this one for example:

knowledge base example

Every node in Freeplane can be marked with colours and shapes and, most importantly, can have a link to any resource, located either locally or remotely.

OS extra layer

We can go further and use it as an extra layer put on top of the OS: a specialized layer which concerns our project domain only, perfectly customized… let's call it the Project Map.
Let's move our knowledge base aside – it will be one of the child nodes in our Project Map.
The Project Map will contain things like:
– the knowledge base which we continuously expand,
– links to things like scripts, reports and files,
– links to external data

There is no point in losing time clicking through menus and recalling the locations of various things. This is distracting and it slows down the work. Everything is now one click away.

Take a look at the screenshot:

ProjectMap

OS overlay example

Again, every node can be clicked and will trigger an action, which can be anything from navigating to another mind map to launching an application.

A nice feature of Freeplane is exporting to a Java applet. Please take a look at this link:

— project mind map —

Thus you can also share the mind map with the team or the world just as I did.

Summary

There are many examples on the web. I am sure you can find inspiration there if this post has not provided it already.

 

Reactive and proactive approach

Theory – friendly requirements

Speaking about quality assurance, there are well-known patterns for how to approach a project. We start with requirements (functional and non-functional), we think about how to describe them, we consider behaviour-driven development, specification by example and other techniques to achieve clear communication with the customer. We create test cases using test design techniques to achieve the right coverage. We have a schedule to fit into, and we can plan our actions.

requirements approach

requirements approach – proactive

This is all very useful at the project start (or new feature start), and it is very nice to have all of it in a project. I call it the proactive approach – I can act before any defect is planted in the code. I can set up my defence before anything has even started. I have time.

Reality – angry user

How many times were you assigned to a project where things were set up in the right way? I think in most cases we are thrown into the middle of a sophisticated project which has many problems and where end users or the client are complaining about many things. And they want you to act fast. How do you act effectively in such a difficult environment?

I say – act reactively first. Group all the defects reported in production and think of the most efficient actions so that when a defect is solved, not only is it retested, but the whole area around the bug is also secured from a quality point of view. Such an approach improves the project in the places which are most important for the end user. When that is done, the users calm down and you can shift the balance towards the proactive approach, which will improve the project in the long term.

field feedback

field feedback – reactive

Examples

Let me give you some examples. They are real ones.

1.

A user is complaining about not seeing data in some places in the application. It turns out there is a problem with the database connection, as the nice DB guys forgot to tell your team they changed the connection string over the weekend. Well, it seems like it is not a defect for us, is it? Another user is complaining about not being able to send anything from the application to some other system. It turns out there is a hardware malfunction on the remote application's side; again, it doesn't seem to be our problem. Finally, a third user cannot save data to the database – this time it seems we have a typo in a connection string in production, so let's fix it.

At this point it is important to notice that these are all configuration-related issues, and we can improve this area of the project by introducing configuration testing, which we are evidently missing. It can easily be added as component (unit/small) testing. All the configurations we have in the code base should be tested during each build, both to catch typos in connection strings and to be aware of the status of external systems, so we know whether a remote system is having a problem or was reconfigured for some reason. On top of that we can build a configuration tool which reads the production configuration and does a quick configuration check whenever we are in doubt whether a production issue is related to connection problems or not.
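A minimal sketch of what such a build-time configuration test could look like (JUnit, with hypothetical property names and file location, not taken from any real project):

```java
import static org.junit.Assert.assertTrue;

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import org.junit.Test;

public class ConfigurationConnectionTest {

    // hypothetical location of the configuration used by the build under test
    private static final String CONFIG = "src/test/resources/connection.properties";

    @Test
    public void databaseConnectionStringFromConfigIsUsable() throws Exception {
        Properties config = new Properties();
        config.load(new FileInputStream(CONFIG));

        String url = config.getProperty("db.connection.string");
        String user = config.getProperty("db.user");
        String password = config.getProperty("db.password");

        DriverManager.setLoginTimeout(5);   // fail fast instead of hanging the build

        // a typo in the connection string or a dead external system fails here,
        // during the build, instead of in production
        try (Connection connection = DriverManager.getConnection(url, user, password)) {
            assertTrue(connection.isValid(5));
        }
    }
}
```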

2.

A few users are complaining that the application works very slowly. All the data loads slowly and it causes a business problem for them. It turns out there is a defect in which a timeout configuration setting – although set in the configuration file – is not applied at runtime, which degrades the performance significantly.

This problem may be addressed by introducing simple automatic comparison testing: in the test environment, the various configuration settings are compared with the runtime settings as reported in the log files.
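A hedged sketch of that idea, with a hypothetical configuration file, log file and log format:

```java
import static org.junit.Assert.assertEquals;

import java.io.FileInputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.junit.Test;

public class TimeoutSettingComparisonTest {

    // hypothetical paths and log format - adjust to the application under test
    private static final String CONFIG = "conf/app.properties";
    private static final String LOG = "logs/startup.log";
    private static final Pattern RUNTIME_TIMEOUT = Pattern.compile("effective timeout=(\\d+)");

    @Test
    public void configuredTimeoutIsReallyAppliedAtRuntime() throws Exception {
        Properties config = new Properties();
        config.load(new FileInputStream(CONFIG));
        String configured = config.getProperty("request.timeout");

        String runtime = null;
        for (String line : Files.readAllLines(Paths.get(LOG))) {
            Matcher m = RUNTIME_TIMEOUT.matcher(line);
            if (m.find()) {
                runtime = m.group(1);   // value the application reports it is actually using
            }
        }

        assertEquals("timeout from the configuration file is not applied at runtime",
                configured, runtime);
    }
}
```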

3.

There are a few major-priority defects reported which are related to the core functionality of the application.

Well, this is an easy one – actually the basics. The goal is to create a pyramid-like set of test cases (many component tests, fewer integration tests and a few system tests) which will cover this functionality as fast as possible, to prevent defects from plaguing the area which is most important for the end user.

4.

Users are again complaining about performance; the description of the issue they give is very general and doesn't help to narrow down the area where the defect is hidden.

Log analysis is needed to find out what the application's characteristics look like in production. It may turn out that logging is not properly implemented and much information is missing, or the other way around, that logging gives too much information. In such a case the development team should start improving it to make log analysis easier in the future.

5.

Developers are spending a lot of time supporting the application, as the support team asks them for help twice a day.

This is a bad situation which slows down the project considerably. Developers get frustrated as they cannot concentrate on one thing during the work day. Project work is slowed down because, instead of improving things or implementing new features, devs are talking for hours with support or end users.

The best reaction in my opinion is to push as many activities to support as possible, and also to start working on things which will allow more activities to be pushed there soon. Well, easy to say, harder to do… but possible. Again, we need to group the problems which are reported by support:

– what does this button do? how do I do this or that in the application? – these are application knowledge questions and should not be directed at developers at all; a knowledge base should be started, or even better an expert system should be built, to allow support to get the needed answers on their own

– the app is not working, there is no data in that window – if it turns out it is an external system which doesn't work, let the support guys use the configuration check tool (create it immediately if it is not available); they should be able to tell by themselves whether there is a problem within the app or it is just an external connection problem

– the app is not working, some issue appears – the support guys should be able to understand the sequence of events which preceded the issue; if they are not able to say anything, maybe they need clearer logging, some memory usage history or more information in the log files to understand the state of the system quickly, and also to immediately deliver to the developer the information needed to fix the defect.

The proactive reality

I mentioned in the last section that in my opinion we should use the proactive approach anyway. However, it isn't hard to think of a project which has problems with requirements, so we can run into trouble here as well. What can we do to avoid relying on production feedback only and start pinpointing defects before they reach the end user? If we do not want to depend on field feedback, and the requirements are missing or incomplete at the same time, a very good method of improving the proactive approach is comparison testing. I show it on the diagram – we have 2 red arrows showing missing or incomplete communication for requirements and feedback:

comparisons

comparisons – improving proactive

Testing by comparison is a powerful technique.

We compare the previous and present versions of the system or of a single component. The previous version can be yesterday's build or the last one released to production, depending on our needs.

What can we actually compare? (A small sketch of such a comparison test follows the list below.)

– performance – this would be so-called benchmarking, to see if the newest version is faster or slower

– functional behaviour – any discrepancies indicate an issue which in turn might mean a defect

– logs – differences (more logging, less logging, more warnings, errors etc.) may indicate a defect

– memory consumption – differences between consecutive test runs may indicate a memory leak defect (notice that we can observe it in comparison testing even if no out-of-memory exception is thrown)
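Here is a small, hedged sketch of the functional and benchmarking comparisons combined; the two URLs standing for the previous and present deployments are assumptions for illustration only:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class VersionComparisonTest {

    // hypothetical deployments: "previous" is the last released build, "present" is today's build
    private static final String PREVIOUS = "http://test-env:8080/api/orders/42";
    private static final String PRESENT  = "http://test-env:8081/api/orders/42";

    @Test
    public void presentVersionBehavesAndPerformsLikePreviousOne() throws Exception {
        long start = System.nanoTime();
        String previousResponse = get(PREVIOUS);
        long previousMillis = (System.nanoTime() - start) / 1_000_000;

        start = System.nanoTime();
        String presentResponse = get(PRESENT);
        long presentMillis = (System.nanoTime() - start) / 1_000_000;

        // functional comparison: any discrepancy is a potential defect to investigate
        assertEquals(previousResponse, presentResponse);

        // benchmark comparison: allow some noise, flag a clear slowdown
        assertTrue("new version is noticeably slower", presentMillis <= previousMillis * 2);
    }

    private String get(String address) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(address).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
        }
        return body.toString();
    }
}
```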

Summary

In my opinion it is good if we can act proactively, but if the application causes problems for end users we should start with field defect analysis – the reactive approach. We need to use production feedback in a smart way to get a large effect from a small piece of information and a small effort. After this, we should move to the proactive part, which may often not be so simple to handle in reality, but comparison testing improves this approach in a significant way.

We stay proactive until production issues are reported again, which makes us react, again in a smart way. In time, the application becomes more stable and reliable, and only then can we finally do exactly what standard QA theory says.

 

ELK == elasticsearch, logstash, kibana

Log analysis

Often we need to analyze logs to get some information about the application. ELK is one of the possible solutions to this problem. The basic source of information is HERE.

Here is a picture showing how it may look:

elk_architecture

ELK architecture example

 

Filebeat – collects log data locally and sends it to logstash

Logstash – parses logs and loads them to elasticsearch

Elasticsearch – a NoSQL data store – stores the data in a structure of "indexes" and "document types"; mappings may exist to alter the way given field types are stored

Kibana – visualizes the data

Desktop – the user may use a web browser to visualize data via Kibana, or write code to interact with elasticsearch via its REST interface.

Besides installing and configuring all of this, one also needs to remember to delete old data from elasticsearch so as not to exceed the disk space – logs can take a huge amount of space, as you know. I achieved that with a simple crontab-started Perl script which parses index names and deletes the ones older than a given number of days. Automatic maintenance is definitely a must.
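I did the cleanup in Perl, but the same idea can be sketched in Java over elasticsearch's REST interface; the host, the index naming pattern and the retention period below are assumptions:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.text.SimpleDateFormat;
import java.util.Calendar;

// minimal sketch of the daily cleanup idea: delete indices older than N days;
// index naming "applicationName-yyyy.MM.dd" and the elasticsearch host are assumptions
public class OldIndexCleanup {

    public static void main(String[] args) throws Exception {
        int keepDays = 30;
        SimpleDateFormat dayFormat = new SimpleDateFormat("yyyy.MM.dd");

        Calendar day = Calendar.getInstance();
        day.add(Calendar.DAY_OF_MONTH, -keepDays);

        // walk back over a year of potential index names and delete the stale ones
        for (int i = 0; i < 365; i++) {
            String index = "applicationName-" + dayFormat.format(day.getTime());
            HttpURLConnection request = (HttpURLConnection)
                    new URL("http://localhost:9200/" + index).openConnection();
            request.setRequestMethod("DELETE");
            int status = request.getResponseCode();   // 200 = deleted, 404 = index never existed
            System.out.println(index + " -> " + status);
            day.add(Calendar.DAY_OF_MONTH, -1);
        }
    }
}
```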

There is a lot on the web about ELK, so I will not be giving any more details here. I would like to concentrate on only one thing, which was the hardest and took me the longest to get right – the logstash configuration.

Logstash

The documentation on the official page seems to be rich and complete; however, for some reason it was very unhandy for me. To say more, the authors somehow managed to create a page which was almost useless for me. I do not know exactly why, but maybe it is because of too few examples, or maybe they assumed the knowledge of the average reader is much higher than mine. Anyway, I needed a really long time to achieve the right configuration for all the items, and especially for logstash.

Let’s get to the details now. Look at this configuration file:

 

  • lines 1-5 are self-explanatory – logstash receives input from filebeat
  • 7+ – start of filtering the data based on the type defined in the filebeat config (surefire will come from another machine, so there is no surefire document type here):

  • 12 – parsing of the log line (seen by logstash under the message variable), an example of which is: "01-01-16 13:05:26,238 INFO some log information"

the following mapping is now created:

  • date = 01-01-16
  • time = 13:05:26,238
  • loglevel = INFO
  • detailed_action = "some log information"

 

  • 17 – parsing "message" again to get datetime = 01-01-16 13:05:26,238
  • 21-25 – a date filter to explicitly set the timezone to UTC – it is important to have all dates and times marked with the proper timezone so that they give correct results when analyzed afterwards
  • 29-48 – the same for the testlogfile type, with one difference:
  • 46 – this is how logstash can conditionally add a field; when at least 2 words are encountered in the detailed_action field, logstash creates a duplicate of detailed_action named detailed_action_notAnalyzed (this is required when creating a mapping in elasticsearch which in turn allows searching for a group of words – see the end of the post)
  • 50-86 – the surefire type, which is interesting because it is XML data
  • 52-60 – does 2 things: firstly it cleans the input line of non-printable characters and extra whitespace, and secondly it adds a datetime field (the logstash internal %{[@timestamp]} field is used); unlike the regular log data we don't have a datetime here, so we have to add it in logstash
  • 61-78 – the xml filter, which maps an xpath expression to a field name; for example <testsuite "name"="suite_name"> will turn into: test_suite = suite_name
  • 79-84 – works around a surefire plugin problem: no status is shown in the XML file when a test case passes (do you happen to know why somebody designed it this way? it is really frustrating and also quite silly in my opinion…)
  • 87-96 – the architecture type is checked here to determine the filename (without the full path); it comes from this place in the filebeat config:

  • 101-104 – a very important feature is here: a checksum based on 3 fields is generated and assigned to the metadata field "my_checksum"; this will be used for generating the document_id when shipping to elasticsearch, which in turn prevents duplicates in elasticsearch (imagine that you need to reload the data from the same server the next day from rolling log files: you would store many duplicates, but having the checksum in place allows only new entries to show up in the database)
  • 110-141 – the output section, which has type-based conditions
  • 116-118 – the logfile document type will be sent to elasticsearch to the index "applicationName-2016.05.26"; the "logfile" document_type will be created with the generated checksum as the document_id (to prevent duplicates)
  • 112, 122, 132 – commented lines which, when uncommented, serve a debugging purpose: output is sent to elasticsearch and to the console at the same time

Other usage scenarios

After investing much effort in the solution, I am also experimenting with other uses – not only for log data – to get more value for the project. After a regression test suite is run, I send the surefire reports and test execution logs (this is the domain language level) to elasticsearch and view them via a dashboard which also collects all the application log data (server and client) at the same time. This gives a consistent view of the test run.

Another interesting capability is the REST interface to elasticsearch. I use it to programmatically download the data, process it and upload the result back. I do this for performance analysis, where the information in the log files as logged by the application requires processing before conclusions can be drawn.

This capability allows, for example, the creation of very complex automatic integration or even system integration tests, where each component knows the exact state of the others by reading their logs from elasticsearch. Of course we should avoid complex and heavy tests (how long does it take to find the problem when a system integration test fails…?), but if you need to create a few of them, it is at least possible from a technical point of view.
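As an illustration, here is a hedged sketch of such a check done over the REST interface; the host, index pattern, type and field names loosely follow the logstash configuration discussed above but are still assumptions:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// sketch of one component checking another component's state through its logs in elasticsearch;
// host, index and field names are assumptions
public class RemoteComponentStateCheck {

    public static String searchLogs(String phrase) throws Exception {
        URL url = new URL("http://localhost:9200/applicationName-*/logfile/_search");
        HttpURLConnection request = (HttpURLConnection) url.openConnection();
        request.setRequestMethod("POST");
        request.setDoOutput(true);
        request.setRequestProperty("Content-Type", "application/json");

        // match_phrase looks for the exact sequence of words in the logged action
        String query = "{ \"query\": { \"match_phrase\": { \"detailed_action\": \""
                + phrase + "\" } } }";
        try (OutputStream out = request.getOutputStream()) {
            out.write(query.getBytes("UTF-8"));
        }

        StringBuilder response = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(request.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
        }
        return response.toString();   // e.g. inspect "hits.total" to decide if the event happened
    }
}
```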

 

In general…

… it is a nice tool. I would just like to name the ELK drawbacks here – I think it is easier to find the advantages on the web:

  • it is hard to delete data from elasticsearch, so most often one needs to rely on index names
  • logstash with a complex configuration can take even a minute to start – so any logstash configuration tests require a separate logstash instance
  • it is hard to debug logstash
  • there is no good way of visualizing the data in Kibana when you are not interested in aggregates; if some event is logged, you can display information like how many times per day/hour/minute it occurs, but you cannot plot it the way you would in gnuplot, for example
  • the search engine is illogical: to be able to find a string like "long data log" one needs to have the field stored as a "not analyzed" string field (the default behaviour for strings is "analyzed", in which case you can only search for single words); there is a trick to create an appropriate mapping in elasticsearch and store a string as "analyzed" and "not analyzed" at the same time (if there is, let's say, a log_message "analyzed" string field, a log_message.raw "not analyzed" variant is created alongside it), but kibana cannot work with *.raw fields; the mapping I am talking about looks like this:

So, you need to split log_message in logstash to create two separate fields (look at line 46 of the logstash config discussed above), e.g. log_message and log_message_notAnalyzed. Otherwise, to search for the "long log data" string in kibana, you have to write something like this:

That query also matches things you do not want to find: "log data long", "log stuff stuff long stuff data", "stuff long log stuff stuff data" etc. This is a real problem, given that the need to find multi-word strings is very common.

That's it for the disadvantages. I think ELK does the job of log analysis very well anyway. It is definitely worth a try.

Get the right coverage

Test design techniques

Let's think about quality control basics for a moment. In my opinion the most important thing is to be able to design the right test cases. No matter whether they are automated or manual, we need confidence that the software we are working on has a very low probability of still-unrevealed functional defects in the area covered by our test cases. We do not have to reinvent the wheel, as test design techniques are already in place to help us achieve this goal. Since this is the basics of the basics for a QA engineer, you know all of them and apply them in practice, don't you…? Based on my interview experience, I can tell you that in reality the majority of QA engineers have heard about some of them, but the majority also do not apply them in practice (as described here). If you happen to be in this notorious majority, I hope you read all of this and stop being part of that group.

Pair wise testing

In this article I would like to concentrate on what is, in my opinion, the most advanced test design technique, or at least the most interesting one from my point of view: pair-wise testing. The purpose of this technique is to deal with the situation when the number of combinations we have to test is too large. Because we have combinations almost everywhere, this is an extremely important thing to know. Examples of combinatorial testing problems are:

  • an application settings page with many switches (we need to know whether some combination of the settings influences any of the others – e.g. Notepad++ preferences),
  • software designed for many platforms (here the combinations are an array of operating systems – UNIX, mobile, Windows – combined with external software – database vendors, database providers),
  • an application which has a REST or SOAP web service interface (the number of available combinations of input data – the application accepts a POST message in XML format, where some of the elements are mandatory and some are optional)

The idea behind the pair-wise technique is to focus on pairs instead of all combinations.
For example, let's imagine we have 3 inputs, each of which accepts one letter at a time. Input 1 accepts only the letters (A,B), input 2 (A,B) and input 3 (A,B,C). We can easily write all combinations for such a model (2x2x3 = 12 combinations):

1 => (A,B)
2 => (A,B)
3 => (A,B,C)

full coverage – all combinations
no 1 2 3
1 A A A
2 A A B
3 A A C
4 A B A
5 A B B
6 A B C
7 B A A
8 B A B
9 B A C
10 B B A
11 B B B
12 B B C

Of course, in such a case we do not need any special approach; we can test all of them. But let's think of a situation where each combination takes 1 week to execute, or where the whole range A-Z is accepted by the 3 inputs, or where each input accepts more than one letter.
We can decrease the coverage from 100% (all combinations) to all pairs. Please notice that 100% coverage here actually means all triplets, so we are moving from all triplets to all pairs:

Let's enumerate all the pairs, as we are now interested only in pairs:

pairs listing
no 1 2 3
1 A A
2 A B
3 B A
4 B B
5 A A
6 A B
7 A C
8 B A
9 B B
10 B C
11 A A
12 A B
13 A C
14 B A
15 B B
16 B C

Let's choose a subset of combinations which contains all the pairs listed above. Consider this:

reducing number of combinations
no Comb. 1 2 3 comment
1 AAA AA_ _AA A_A we don’t need this, we have these pairs in AAC, BAA and ABA
2 AAB AA_ _AB A_B we don’t need this, we have these pairs in AAC, BAB and ABB
3 AAC AA_ _AC A_C
4 ABA AB_ _BA A_A
5 ABB AB_ _BB A_B
6 ABC AB_ _BC A_C
7 BAA BA_ _AA B_A
8 BAB BA_ _AB B_B
9 BAC BA_ _AC B_C we don’t need this, we have these pairs in BAB, AAC and BBC
10 BBA BB_ _BA B_A we don’t need this, we have these pairs in BBC, ABA and BAA
11 BBB BB_ _BB B_B we don’t need this, we have these pairs in BBC, ABB and BAB
12 BBC BB_ _BC B_C

So, we can now use just 7 combinations out of the original 12 (a small sketch verifying this coverage follows the table below):

all pairs coverage
no Comb. 1 2 3 comment
1 AAC AA _AC A_C
2 ABA AB _BA A_A
3 ABB AB _BB A_B
4 ABC AB _BC A_C
5 BAA BA _AA B_A
6 BAB BA _AB B_B
7 BBC BB _BC B_C
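Before handing this over to a tool, here is a small sketch (my own illustration, not from any library) which verifies that the 7 combinations above really cover all 16 pairs of the 2x2x3 model:

```java
import java.util.HashSet;
import java.util.Set;

public class AllPairsCheck {

    public static void main(String[] args) {
        String[][] inputs = { {"A", "B"}, {"A", "B"}, {"A", "B", "C"} };
        String[] chosen = { "AAC", "ABA", "ABB", "ABC", "BAA", "BAB", "BBC" };

        // every pair of positions with every pair of values must appear in some chosen combination
        Set<String> required = new HashSet<>();
        for (int i = 0; i < 3; i++) {
            for (int j = i + 1; j < 3; j++) {
                for (String vi : inputs[i]) {
                    for (String vj : inputs[j]) {
                        required.add("" + i + vi + j + vj);
                    }
                }
            }
        }

        // collect the pairs actually covered by the 7 chosen combinations
        Set<String> covered = new HashSet<>();
        for (String combination : chosen) {
            for (int i = 0; i < 3; i++) {
                for (int j = i + 1; j < 3; j++) {
                    covered.add("" + i + combination.charAt(i) + j + combination.charAt(j));
                }
            }
        }

        required.removeAll(covered);
        System.out.println(required.isEmpty()
                ? "all " + covered.size() + " pairs are covered"
                : "missing pairs: " + required);
    }
}
```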

Can we reduce the number of combinations further? Yes, we can move from all-pairs coverage to single-value coverage, which means we want to use every possible value of each input at least once and we do not care about combinations at all:

single value coverage
no 1 2 3
1 A A A
2 B B B
3 A B C

In this set of 3 combinations, input 1 uses A and B, input 2 uses A and B and input 3 uses A, B and C, which is all that we need.
As you can see we have a nice theory, but we are not going to compute these things manually, are we?

TCases

Let's use some software for the example from the previous section.
It is named TCases and it is located HERE. I will not be explaining its usage, as there is excellent help on that page (I tell you, it is really excellent). It is enough to say that we need an input file which models the inputs and a generator file which allows us to set the actual coverage. The input file for the example shown above looks like this:

Then we say we want to have single-value coverage (which is called 1-tuple coverage):

We get the same result as we did manually:

It doesn't matter that the 3rd test case is different, as the only thing which matters there is to use C for the 3rd input.

Now, let's repeat the all-pairs coverage (named 2-tuple coverage):

The result is:

There is a slight difference, as the number of test cases here is 8 while it was 7 when done manually. This is because TCases by default doesn't guarantee a minimal set of combinations. We need to explicitly ask for it by using tcases-reducer (I must say I overlooked this initially – many thanks to Kerry Kimbrough for helping me with it). Looking at this result we can see we need to exclude AAA from this set, as the pairs from AAA are also present in the test cases with ids 2 (AA_), 4 (_AA) and 7 (A_A). Let's see what happens.
After running tcases-reducer the generator file is modified:

After running tcases with the new generator, the result is surprising:

Wait a minute, we have only 6 combinations; how is this possible?
Let's combine the manual table I created previously with this one:

all pairs coverage
no Comb. 1 2 3 AUTO COMB 1 2 3
1 AAC AA_ _AC A_C  AAC  AA_  _AC  A_C
2 ABA AB_ _BA A_A  ABA  AB_  _BA  A_A
3 ABB AB_ _BB A_B AAB AA_  _AB  A_B
4 ABC AB_ _BC A_C BBB  BB_  B_B  _BB
5 BAA BA_ _AA B_A  BAA  BA_  B_A  _AA
6 BAB BA_ _AB B_B BBC BB_  B_C  _BC
7 BBC BB_ _BC B_C

It turns out that the sequence in which we choose combinations to leave out matters.
TCases found a way to have only 6 combinations and still cover all pairs! The differing combinations are marked green. As I pointed out, there are 16 pairs in total: my manual combination set was redundant by 5 pairs, while the computed combinations are redundant by only 2 pairs (take a look at the yellow cells).

And finally, let's change the generator to use triplets, which means 100% coverage in our example:

Result:

So it must be the same, which is 12 combinations.
As you can see, this application works really nicely.
But what about using this technique for real-life testing problems? Let's apply it!

Practice

Rarely do I find such superb software as this. I am not sure if you see the power of this tool yet, so let's use it in a practical example.
Let's assume testing of the Notepad++ "recent file history" is required.
The setting looks like this:

recent files history

recent files history

The input model for this testing problem can look like this:

Take a look at customSize_isSetTo – this is a dependent variable, related to the customize_maximum_length value of the configuration_isSetTo variable.
Now, the simplest test suite, with 1-tuple coverage, looks like this:

Custom size is set to NA when the maximum length is not customized in the test case. It is really important to process dependent variables like this.
And 2-tuple coverage:

We can now decide whether we have enough resources to run 1-tuple or 2-tuple coverage. We only need to concentrate on a proper input model to be confident we are achieving the right coverage. What is also important, we have documented the way the test cases are created.
It is a giant leap towards the right coverage.

Statewise testing (GUI automation part IV)

Introduction

You must have heard about state transition tables. This is a great test design technique for achieving the right coverage when testing a GUI application. We analyze the application from the perspective of its states, so we can adjust the coverage we achieve according to the resources we have available. In this part about GUI automation I will show how easily we can adapt the testing framework to use it.

Details

At this point we have a framework available which has a DSL layer, and it looks very nice (as shown HERE). What we can test at this point (from a test design technique point of view) is actually a decision table with a single test case, like this:

action TC 1
GIVEN().tab().isOpened(). X
WHEN().tab().isWritten(text). text = "text1"
AND().tab().contentIsCopied(). X
THEN().clipboardContent().shouldBe(equalTo(text)); X

However, we need to be able to look at the application from a state transition point of view, so we can act based on the state transition diagram. We would like to have automation which can execute the state transition diagram by walking the paths we choose.

Let's look at an example of one – it focuses on the LTR/RTL feature, which is the left-to-right (and vice versa) text direction in Notepad++.

text direction feature in Notepad

text direction feature in Notepad

State transition diagram with text direction focus

State transition diagram with text direction focus

As you can see, there is one main state, which is having one tab selected in Notepad++ (there is always some tab selected). There are also 2 substates which indicate whether RTL or LTR is used. This means that when we start using the application we always start with 1 tab selected and LTR text direction. Then, we can change the text direction or perform any of the other transitions like createNewTab, selectTab or deleteTab. I do not want to write out all the state transition diagram details here (I hope to create a separate post on test design techniques some day) – let's only say that the basic coverage for this technique should be to execute each state and each transition at least once.
We would like to transform this diagram into Java code, introduce one extra layer on top of what we already have in the Sikuli testing framework, and then run it.

Let’s do it!

Each of the 3 states will be represented as a separate class in Java:

Now we can navigate through the state transition diagram using IntelliJ suggestions and build any path we wish. As a result we can adjust the coverage to our needs.
An example test case based on the state transition diagram looks like this:

What we test here is that we navigate through all the states and transitions at least once and at some point execute assertTabIsWritable. By doing so, we test whether the tab currently selected in the test flow is writable or not. Under the hood we use the Sikuli-based testing framework described in the previous posts.

As you can see, we are using the existing framework to check whether the tab is writable. We are focusing only on this feature. We could of course check much more: for example, when executing createNewTab we could check whether an extra tab was created etc., but I skipped that for simplicity's sake.
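The actual classes are in the linked repository; purely as an illustration of the pattern, a hedged sketch with hypothetical names could look like this: each transition method drives the application through the existing Sikuli DSL and returns the state class it ends up in, so only legal transitions can be chained.

```java
// hypothetical state classes, not the ones from the repository
class OneTabSelectedLTR {

    public OneTabSelectedRTL changeTextDirectionToRTL() {
        // here the Sikuli-based DSL would switch Notepad++ to RTL text direction
        return new OneTabSelectedRTL();
    }

    public OneTabSelectedLTR createNewTab() {
        // the DSL creates a new tab; direction handling is simplified in this sketch
        return new OneTabSelectedLTR();
    }

    public OneTabSelectedLTR assertTabIsWritable() {
        // the existing framework types text into the tab and verifies it appears
        return this;
    }
}

class OneTabSelectedRTL {

    public OneTabSelectedLTR changeTextDirectionToLTR() {
        return new OneTabSelectedLTR();
    }

    public OneTabSelectedRTL assertTabIsWritable() {
        return this;
    }
}

// an example path through the diagram, touching both substates:
// new OneTabSelectedLTR().createNewTab().assertTabIsWritable()
//         .changeTextDirectionToRTL().assertTabIsWritable()
//         .changeTextDirectionToLTR().assertTabIsWritable();
```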

The video goes here:

You may also view the source code which is
HERE under GUI_automation_part4 branch.

Summary

This is just an example of automating tests which are based on a state transition diagram. The power of this approach comes from the test design technique itself. Firstly, if a requirement is described in this way, we are ready for automatic testing. Secondly, in my opinion, when such diagrams describing specific functionality exist in the project, it is possible to use them, for example, for retesting. If there is a new feature F1 in functionality A1, it is possible to have an automated test for the A1 functionality by executing its state transition diagrams to check that nothing there is broken by F1.

The great GUI automation summary

So this is all I wanted to present about GUI automation. In my opinion this framework is extremely flexible. It could be used for any application – not only a GUI one. It is just a matter of substituting Sikuli with something else, like a self-made framework for testing a web service application. One could simplify it from a programming point of view, add tests for the framework itself (unit tests), or else improve the DSL layer. Still, even such a simple version is usable and shows the potential.
I hope that the tank comparison from the 1st part (HERE) is clearer now :)

Know, understand, apply

I have been doing many interviews recently, for QA positions in my company. In my opinion there is a common pattern, or better to say common PROBLEMS, that candidates very often have. I group them into minor and major ones.

Minor problems

Lost in details of current project

The majority of candidates are lost in their everyday tasks. They lose the ability to look at their work, knowledge and self-development direction in a more general way. Often, they are not aware they are missing important knowledge. The lost ones always start their answer to any question with the words "in my project…". They often say that in their company the project is badly managed and the wrong tools are used.

No time management

Sometimes a candidate is aware that he is missing important points. However, he immediately adds that he cannot find time to learn, as his project uses different tools/technology and he finishes work in the evening. Why can't he use the time from 8 till 11 every second weekday, which would be around 9 hours a week, instead of watching yet another silly series on TV or doing some other meaningless task? I get really irritated when I hear the keyword "no time"… In fact they have plenty of time, they just use it for things they consider more important than their self-development.

No plans for self development

They sometimes do not know what their plans are; the conversation often goes like this when recruiting for QA:
– what would you like to do in your next job? what would be most interesting for you?
– I can continue testing like here at my current position, I think I would like to start automation… well hmm… I think I like programming… yes, I think I would like to become a developer…
(– so why are you applying for QA then???)
– ok, so what do you do to learn programming now?
– well, I am reading a book but I do not have time because I have a lot of work at my job and I cannot program as I am only testing
(and here we go again: “no time management”)

Major problems

Problem 1: People do not have knowledge

My basic question is about test design techniques. Around 50% of my interviewees do not understand the question (I need to give an example so that we can continue), while around 40% are able to name just boundary value analysis and equivalence partitioning. I would say only 10% are able to say something about decision tables and state transition diagrams, and few people know pair-wise testing. When I then ask how they test things, I very often hear about experience-based techniques. Are they all really experts…?

Problem 2: People do not understand what they know

There are about 40% who can talk about basic test design techniques.

– so which test design techniques do you know?
– oh… an ISTQB question? Yes I know, we have equivalence and boundaries. Yes, I know all these. It's been half a year now since I took the exam, so I can't remember all the definitions exactly
(– so are you aware of the techniques which are there to help you with testing problems, or are you just firing off test cases intuitively?)
– are there any others?
– well there is some state testing technique…
– what is it about?
– hmm I cannot remember

Problem 3: If they seem to understand, people most often do not apply their knowledge

Sometimes one can hear that the candidate understands the material and is able to talk about equivalence partitioning, boundary value analysis, decision tables and state transition diagrams. But when asked which test design technique helps us with sequences of actions, like those we have when testing a desktop GUI, such a candidate cannot provide the right answer.
Test design techniques are there to help us find solutions to various testing problems; they give us reasonable coverage and therefore confidence that the probability of a defect is very low. If you work ad hoc without any approach, the result of your work will always be unpredictable and you will not be able to increase the quality of the product. However, people think of test design techniques as theoretical knowledge which you should know, yet they are unable to use them in their work tasks.
I think applying things in practice is the most difficult step, but we just have to make it consciously every day.

Conclusion? Do not be a junior!

So, first get the knowledge, then understand it, and then apply it in your everyday work. These are the basics, but they will immediately bring you results and make you stand out from the crowd of juniors.