Software Testing notes

"If debugging is the process of removing software bugs, then programming must be the process of putting them in." - Dijkstra
 
It is safe to say that software is ubiquitous, touching almost everyone on this planet either directly or indirectly. The software we help produce therefore has the potential for wide-reaching impact, and whether that impact is positive or negative hinges largely on the quality of what we produce. Testing contributes directly to improving software quality by detecting defects so that they can be addressed before the product ships.

When defects go undetected and consequently unaddressed, they surface as failures during operation of the software. These failures can carry significant costs for the producer, including (but not restricted to) the cost of handling issues reported back by customers and issuing patches, loss of customer confidence and credibility, loss or corruption of data with all the damage that can cause, and legal exposure arising from failure or non-compliance. These repercussions are best avoided by taking proactive steps to prevent and eliminate defects during software production: a relatively small investment in keeping defects from shipping avoids spending a much larger amount later on handling their consequences.

So, what is the purpose of testing? We may summarize it in the following three points.

1. To validate conformance of the software to the business requirements
2. To verify conformance of the software to the design and specifications
3. To find errors (a minimal sketch of this follows the list)
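As a minimal sketch of the third point, consider a boundary test exposing a defect. Everything here is hypothetical and invented for illustration: the discount() function, its "spec", and the chosen values do not come from these notes or any real system.

```python
# Hypothetical sketch: a single boundary test exposing an off-by-one defect.

def discount(order_total: float) -> float:
    """Spec (hypothetical): orders of 100.00 or more earn a 10% discount."""
    if order_total > 100.00:   # defect: '>' where the spec says 'or more' (>=)
        return round(order_total * 0.10, 2)
    return 0.0

def test_discount_at_exact_boundary() -> None:
    # Verifies the implementation against the spec at the boundary value.
    # This assertion fails, which is the test doing its job: it has found
    # the defective comparison before the product ships.
    assert discount(100.00) == 10.00

if __name__ == "__main__":
    test_discount_at_exact_boundary()
```

A test like this serves all three purposes at once: it encodes a business rule, checks the implementation against the specification, and, when it fails, finds an error.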

An important element in finding errors is timing. The earlier in the development cycle an error is detected, the less expensive it is to fix. Studies show that the longer errors remain undetected across the development life cycle, the greater the cost of fixing them later. Different studies give varying estimates of the cost of fixing defects at the various stages of development, testing, and deployment, but all agree on the shape of the curve: the cost is lowest for defects identified during the initial requirements stage, rises through each subsequent stage such as design, implementation, and testing, and is highest for defects found after the product has shipped. One common estimate puts the cost of fixing a defect post-release at over 40 times the cost of fixing it at the requirements stage.

The point is that for software testing to be of real value to the business, it must not be relegated to the tail end of the development life cycle, coming in only after implementation. The earlier testing is engaged, the more defects can be prevented from being carried over across development phases, and the lower the cost of addressing them.


"Program testing can be used to show the presence of bugs, but never to show their absence" - Dijkstra

One of the fundamental principles of software testing is that testing can show the presence of errors, but never their absence. To prove that software is free of defects would require testing the system completely. Complete testing would include, among other tasks, testing the system with:

- every possible input value it can take
- every possible combination of inputs that can be passed in
- every possible path of execution
- every possible compatibility scenario
- every possible interaction with other components, be they software, hardware, or human
- every combination and version of its dependencies
- every possible situation in which the system may be used

The entire space of what is possible to test is infinite for any non-trivial software system.

It is not just the number of tests that is infinite; in most cases the number of possible input values is itself effectively unbounded. Even a very simple input field that accepts a range of numbers would require testing all the valid numbers it accepts as well as the invalid ones, those below the smallest or above the largest value the field is supposed to accept. Similarly, for a set of input fields where user data is accepted, every combination of valid and invalid values that may be passed to those fields would need to be tested for testing to be truly complete. And if testing such an extremely large set of input values seems enough, think again: further tests could cover editing or altering values as they are being entered, or delaying entry to check time-out handling, and so on. Even if one were to embark on an attempt at complete testing, software testing is not an isolated function with unlimited time and budget at its disposal. Testers in the real world must complete testing within a set amount of time and within budget, and complete testing almost never fits within these boundaries.
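To make the explosion concrete, here is a deliberately simplified, back-of-the-envelope calculation. The form, its fields, and their ranges are hypothetical and invented for illustration; only the combinatorial argument comes from the text above.

```python
# Back-of-the-envelope sketch of the input-space explosion for a
# hypothetical form with just three independent inputs.

age_values     = 130        # ages 0..129
name_values    = 26 ** 20   # 20-letter names from A-Z only (already an underestimate)
country_values = 200        # entries in a fixed country list

# Combinations of *valid* values alone, ignoring invalid inputs,
# input ordering, timing, and environment:
total_cases = age_values * name_values * country_values
print(f"valid-value combinations: {total_cases:.3e}")

# Even at a million test executions per second, exhaustively testing
# just these three fields would take longer than the age of the universe.
seconds = total_cases / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"time to run them all: ~{years:.3e} years")
```

And this counts only valid values for three fields; invalid values, editing sequences, timing, and environment each multiply the space further.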

Given that on one hand you cannot truly claim a system has no defects until you have tested it completely, while on the other hand complete testing is not practically possible, testing may appear to be a fundamentally flawed process. A software system of non-trivial size and complexity contains a practically unbounded number of potential defects, so testing can, in theory, provide only a vanishingly small level of quantitative confidence in the quality of the software. So, would it be right to conclude that software testing is not useful?