Static Testing

Static testing is a form of testing that does not involve executing the application under test. It involves going through the application's code to check mainly for conformance to functional requirements, conformance to design, missed functionality and coding errors. Static testing is generally effective in finding errors in logic and coding.

Static testing may be performed by humans as well as by software tools. Here, we look at static testing performed by humans, of which there are three main types:
  1. Desk checking
  2. Code walkthrough
  3. Code inspection
While desk checking is performed by the author of the code, who reviews his/her own portion of the code, the other two techniques, walkthrough and inspection, involve a group of people beyond the author performing the review.
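To make this concrete, here is a small, hypothetical Python function of the kind a desk check or walkthrough might review; the function and the defect are invented purely for illustration:

```python
def orders_in_range(orders, start_date, end_date):
    """Return the orders placed between start_date and end_date, inclusive."""
    selected = []
    for order in orders:
        # Desk-check finding: the stated requirement is an *inclusive*
        # range, but the '<' below silently excludes orders placed
        # exactly on end_date. Reading the code against the requirement
        # reveals this boundary error without ever running the program.
        if start_date <= order.date < end_date:
            selected.append(order)
    return selected
```

A reviewer comparing this code against the requirement would flag the comparison operator as a candidate defect, which is precisely the kind of finding desk checks, walkthroughs and inspections are meant to produce.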

Normally, static testing is performed in the window after the application is coded and before dynamic testing begins. Static tests may also be performed earlier, as parts of the application are developed, or even at earlier stages of the development life-cycle, for example to review the design.

Static testing techniques help find errors sooner. It is generally accepted that the sooner errors are identified, the less expensive they are to fix. Errors found during static testing are therefore quicker and cheaper to fix than the same errors found in a formal dynamic testing phase or even later.

Errors found during static testing can often be pinpointed precisely to a particular location in the code. Also, given the general tendency of defects to cluster together, it is likely that we can identify error clusters and their location, to be addressed as a batch. This quality of static testing contrasts with dynamic testing, and especially black-box testing techniques, which tend to highlight only the symptom of an error. For example, one may observe errors during program execution, such as incorrect validation of inputs or crashes, which are symptoms of the underlying error condition; further debugging is needed to ascertain the location of the error and address it.
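As a brief, invented Python example of this contrast, consider the function below. A black-box test only shows the visible symptom (a crash on certain inputs), whereas a code review points directly at the line that needs a guard:

```python
def discount_percentage(list_price, sale_price):
    """Return the discount as a percentage of the list price."""
    # Review finding: there is no guard for list_price == 0 (or for
    # negative prices), so this division can raise ZeroDivisionError
    # or return a misleading value. The review pinpoints this exact line.
    return (list_price - sale_price) / list_price * 100

# A dynamic, black-box test only surfaces the symptom:
#   discount_percentage(0, 0)  ->  ZeroDivisionError
# and further debugging is then needed to trace the failure back to
# the unguarded division above.
```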

Static testing is sometimes criticized on the grounds that it cannot unearth "all types" of defects, or defects that are complex, and so on. Static testing is useful for finding certain types of errors more quickly or effectively than dynamic testing. It is not a case of choosing between static and dynamic testing; the two techniques are complementary and together help improve quality.

IEEE Std 1044-2009 (Revision of IEEE Std 1044-1993)

IEEE Std 1044 is better known as the IEEE Standard Classification for Software Anomalies.

In this blog entry, we revisit the definition of the term “anomaly” as per IEEE Std 1044. In an earlier blog entry we had looked at the definition of an anomaly in the earlier revision of the standard (1044-1993). Recently a reader of this blog wrote to me asking about the definition in the latest revision (1044-2009); hence this updated post, where we look at the definition as per IEEE Std 1044-2009.

For those of you who are hearing about this for the first time, here's a very brief summary of what the IEEE Std 1044-2009 is about. This standard provides a uniform approach to the classification of software anomalies, regardless of when they originate or when they are encountered within the project, product, or system life cycle. Data thus classified may be used for a variety of purposes, including defect causal analysis, project management, and software process improvement.

As per the standard, “The word “anomaly” may be used to refer to any abnormality, irregularity, inconsistency, or variance from expectations. It may be used to refer to a condition or an event, to an appearance or a behavior, to a form or a function.”

The previous revision of the standard (1044-1993) described the term “anomaly” as equivalent to error, fault, failure, incident, flaw, problem, gripe, glitch, defect or bug, which essentially blurred the distinctions amongst these terms. While these terms can be used fairly interchangeably in face-to-face communication, where any ambiguity regarding their meaning is resolved by the richness of direct conversation, such loose usage is generally not conducive to written or other indirect communication. For precision in communication, specific terms such as defect, error, failure, fault and problem are defined and used to refer to more narrowly defined entities.


Stress Testing

In today's post, we look briefly at Stress Testing. As testers, we often come across the terms Performance, Load and Stress testing, which are sometimes used interchangeably. Here's a perspective on Stress Testing.

What is Stress testing?

Stress testing is a type of testing carried out to evaluate a system's behavior when it is pushed beyond its normal load or operational capacity. In other words, it involves subjecting a system to loads heavier than the system is expected to handle normally, often to the point where the system breaks or is unable to handle further load. Stress testing can take a two-pronged approach: increasing the load while denying the system sufficient resources to handle that load.

Stress testing helps to determine the robustness, scalability and error-handling capability of the system under test. While some stress tests represent scenarios that the system may be expected to experience during its use, others represent scenarios that are unlikely to be encountered. Both types of tests can be useful if they help identify errors. An error that shows up in a stress test could also show up in real usage under a lighter load.

What is the purpose of Stress testing?

While not an exhaustive list, Stress testing helps to determine:

  • the ability of the system to cope with stress, changes, and extreme or unfavorable circumstances in its operating environment
  • the maximum capacity of the system / its limits
  • bottlenecks in the system or its environment
  • how the system behaves under conditions of excessive load or insufficient resources to handle the excess load
  • the circumstances in which the system will fail, how it will fail and what attributes need to be monitored to have an advance warning of possible failure
  • what happens in case of failure -
    • does the system slow down or crash or hang up / freeze
    • is the failure graceful or abrupt
    • is there any loss or corruption of data
    • is there any loss of functionality
    • are there any security holes that are open
    • does the system recover from failure gracefully back to its last known good state
    • does the system provide clear error messages and logging or just print indecipherable codes
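As a rough sketch of the load-increasing side of stress testing, here is a minimal Python example. The operation `place_order`, the ramp values and the per-worker request count are hypothetical stand-ins; a real stress test would drive the actual system (for instance, over HTTP) and also monitor resource usage, response times and error logs:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def place_order(order_id):
    """Hypothetical stand-in for the operation under stress; replace
    with a real call to the system under test."""
    time.sleep(0.01)  # simulate a small amount of work

def stress_step(concurrency, requests_per_worker=50):
    """Fire a burst of requests at the given concurrency and count failures."""
    failures = 0
    started = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(place_order, i)
                   for i in range(concurrency * requests_per_worker)]
        for future in as_completed(futures):
            if future.exception() is not None:
                failures += 1
    return failures, time.time() - started

# Ramp the load upwards to probe the limits, bottlenecks and manner of
# failure described above.
for concurrency in (10, 50, 100, 200, 400):
    failures, elapsed = stress_step(concurrency)
    print(f"concurrency={concurrency}: {failures} failures in {elapsed:.1f}s")
```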

Technology-focussed test automation - a pitfall

Ever been in a situation where your test automation project was assigned to someone who was most interested in technology and coding and wanted to get away from the "routine" of testing? Nothing wrong with being technically inclined and getting bored occasionally with testing! (Read more here on dealing with boredom in testing)

However, what normally happens is that an engineer, or a set of engineers, who shows the most propensity to pick up a new tool or technology and run with it, while wanting to get away from regular testing tasks, is handed the reins of test automation. Oftentimes the output of such an automation effort turns out to be less than desirable from a testing perspective. What do I mean? How can we have poor automation when employing our "star" technical resources? Note the point I am making: the probability of ending up with poor automation is higher when the focus is mainly on the technology or tools used in automating, rather than on solving the testing problem well.

Who would you assign to do test automation? The answer to that question is a key determinant of test automation success or failure. Agreed, it is not the sole determinant, but it does play a very significant role. A common situation one may observe when embarking on test automation is an excessive focus on the tools or technology used to automate tests. Now, how could this be a negative factor in automation? Isn't it a good thing to be keenly focussed on the technology used in automating tests? Yes and no. Yes, since it is important to identify the right set of tools and technologies to automate your tests; you would not want to embark on an automation exercise only to meet roadblocks as the tool proves incapable of meeting your specific requirements. But now that you have the necessary tools, will it pose a problem if you continue to be focussed on the technology used to automate tests? Focus on technology is not bad in itself, unless that focus makes you lose sight of the bigger picture: the testing problem you are trying to solve using that technology.

It is easy to become fascinated with the workings of a tool or technology and dive deep while losing sight of the reason for using that tool. Not convinced such a thing could occur? From my own experience across several projects, I have come across instances where a person in testing who has an interest in coding and solving technical challenges, and is often bored with the "routine nature" of testing, is asked to automate tests. This individual usually gets excited about the chance to do something "challenging" and "technical" as opposed to "just" testing.

What happens from there on is that this employee often gets caught up in trying to show the world what a great technical resource he/she is, and creates suites that may not be maintainable if the employee leaves the group, may not be documented well enough for someone else to understand easily, and may not be effective from a testing point of view. True, the suite might have several technical bells and whistles and look complex and shiny, but at the end of the day what counts is what the suite can do. If the sole focus while automating has been the technology used, the end result may not justify the investment in automation.

Also, the technical resource you have handed the automation work to could be a wannabe developer who is keen on sharpening his/her coding skills before moving on. Unless you end up with a well-designed, maintainable and useful test automation framework or suite, your investment in automation is not justified. Some folks argue that they have not spent a penny purchasing or licensing automation tools, having used open-source or free tools, and hence have not really invested much in automation. However, tool costs are only one part of the picture. True, they can represent a sizeable chunk of the investment, but you must also account for employee costs as well as the opportunity costs of automation. What could you have done if you had not pursued this path of ineffective or throw-away test automation? What if you had chosen to put the right resources in place and manage the automation effectively? What are the costs in terms of time spent by employees in automating? What are the costs involved in learning how the automation suite works and then maintaining it? Will changes to the product being tested, or any of its dependencies, break the automation suite, and if so, what costs will be incurred to make the suite usable again? These and many such questions need to be considered when evaluating returns from your investment in test automation.

For automation to succeed, the people involved should be carefully chosen and managed. An ad-hoc or improvise-as-you-go approach will not produce a robust framework; detailed planning, design and development following sound, standard practices and principles are essential. Ultimately, the tool or technology you choose is an enabler, not the reason for automation.