Software Testing notes

"If debugging is the process of removing software bugs, then programming must be the process of putting them in." - Dijkstra
 
It is safe to say that software is ubiquitous and touches almost everyone on this planet, directly or indirectly. The software we help produce has the potential for wide-reaching impact. Whether that impact is positive or negative hinges largely on the quality of the software we produce. Testing directly contributes to improving software quality by detecting defects and enabling them to be addressed before the product ships.

When defects are not detected and consequently left unaddressed, failures result during operation of the software. These failures can have significant cost implications for the producer of the software, including (but not restricted to) the cost of addressing issues reported back by customers and issuing patches, loss of customer confidence and credibility, loss or corruption of data (which in turn can cause much damage), legal liability due to failure or non-compliance, and many other repercussions that are best avoided by taking proactive steps to prevent and eliminate defects during software production. A relatively small investment in preventing defects from shipping helps avoid spending a much larger amount later on handling the consequences of those defects.

So, what is the purpose of testing? We may summarize the purpose of testing into the following three points.

1. To validate conformance of the software to the business requirements
2. To verify conformance of the software to the design and specifications
3. To find errors

An important element in finding errors is timing. The sooner errors are detected in the development cycle, the less expensive they are to fix. Studies show that the longer errors remain undetected across the development life cycle, the greater the cost of fixing them later. Different studies provide varying estimates of the cost involved in fixing defects at different stages of software development, testing and deployment. However, all of them agree that the cost is least for defects identified during the initial requirements stage, increases for every subsequent stage such as design, implementation and testing, and is highest for defects found after the product has shipped. For example, one estimate puts the cost of fixing defects post release at over 40 times the cost of fixing them at the requirements stage.
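The escalation described above is easy to see with a toy calculation. The stage multipliers below are assumptions chosen purely for illustration; only the rough "1x at requirements, 40x+ after release" shape follows the estimates cited above, and real figures vary widely between studies.

```python
# Relative cost of fixing a defect by the stage at which it is found.
# Multipliers are illustrative assumptions, not figures from any study.
STAGE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "implementation": 10,
    "testing": 20,
    "post-release": 40,
}

def relative_fix_cost(stage: str, base_cost: float = 100.0) -> float:
    """Estimated cost of fixing a defect found at `stage`, given the
    cost of fixing it at the requirements stage (`base_cost`)."""
    return base_cost * STAGE_MULTIPLIER[stage]

for stage in STAGE_MULTIPLIER:
    print(f"{stage:>15}: {relative_fix_cost(stage):8.0f}")
```

Whatever the exact multipliers, the point stands: the same defect costs an order of magnitude more to fix the longer it goes undetected.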

The point here is that for software testing to be of greater value to the business, testing must not be relegated to the tail end of the development life cycle, coming in only after implementation. The earlier testing is engaged, the more defects can be prevented from being carried over across development phases, and the lower the cost of addressing them.

Software Testing notes

"Program testing can be used to show the presence of bugs, but never to show their absence" - Dijkstra

One of the fundamental principles in software testing is that testing can be used to show the presence of errors, but not their absence. To prove that the software is free of defects would require the system to be tested completely. Complete testing would include testing the system with every possible input value it can take, every possible combination of inputs, every possible path of execution, every possible compatibility scenario, every possible interaction with other components be they software, hardware or human, every combination and version of dependencies, every possible situation in which the system may be used, and so on. The entire space of what is possible to test is infinite for a non-trivial software system.

It is not just the number of tests that is infinite; in most cases the number of possible input values is itself practically unbounded. Even if you were to consider a very simple input field that accepts just a range of numbers, complete testing would require testing all the valid numbers that will be accepted as well as all the invalid numbers that are either less than the least or greater than the greatest number the field is supposed to accept. Similarly, when you have a set of input fields where user data is accepted, every combination of input values, both valid and invalid, that may be passed to these fields needs to be tested for testing to be truly complete. If you thought that testing with such an extremely large set of input values was enough, think again. Additional tests may be added for scenarios involving editing or altering values as they are being entered, or delaying entry of values to check for time-out handling, and so on. Even if one were to embark on an attempt at complete testing, the fact is that software testing is not an isolated function with unlimited time and budget at its disposal. Testers in the real world are required to complete testing in a set amount of time and within budget. Complete testing almost never fits within these boundaries.
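A quick back-of-the-envelope calculation makes the explosion concrete. The numbers here are assumptions for illustration only: a form with five independent fields, each accepting a thousand distinct values, already yields more combinations than could be executed in many lifetimes.

```python
field_values = 1_000   # distinct values one field accepts (assumed)
num_fields = 5         # independent input fields on one form (assumed)

# Every combination of the five fields' values
combinations = field_values ** num_fields
print(f"{combinations:,} combinations")

tests_per_second = 1_000   # wildly optimistic execution rate (assumed)
years = combinations / tests_per_second / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years to run them all")
```

And this counts only simple value combinations; it ignores invalid inputs, ordering, timing and environmental variations, each of which multiplies the space further.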

Given that on one hand you cannot truly state that there are no defects in the system until you have tested it completely, while on the other hand complete testing is not practically possible, testing may be viewed as a fundamentally flawed process. While there are an infinite number of potential defects in a software system of non-trivial complexity and size, testing can theoretically only provide an infinitesimally small level of quantitative confidence in the quality of the software. So, would it be right to state that software testing is not useful?

The Test Strategy and the Test Plan

Some more thoughts on test strategy and test plan.

Strategy follows vision. Vision may be viewed as the desired future which is sought to be created or a desired state that is sought to be reached. From a testing perspective, when defining a vision for your test group, it might be easier to articulate the vision in terms of what you envisage your group - to be, to provide, for whom and possibly when. This is not a template but a little aid to help with thinking about your vision. 

The vision helps the organization focus on its long-term goals and on where/what it wants to be. The (test) strategy provides a road-map or approach to achieving these goals; it answers questions such as what we do to reach our objectives and identifies the means to achieve them. The (test) plan, however, is tactical in its approach. The plan looks at deploying the means described by the strategy to achieve the desired ends. The plan is mainly about the how of testing and test execution.

At what level does a test strategy reside? It depends. There are test strategies defined at an organization level and strategies defined for a specific project or product. The nature of the testing organization, whether centralized or de-centralized, will also have some bearing on how strategies are defined. In a centralized structure, the chances of having a common high-level strategy are definitely greater, compared to a de-centralized structure where the inclination toward a project- or product-specific strategy is greater. A test strategy would include items such as the approach to testing, the test design methodologies to use, techniques and tools to use, the levels and types of testing to be performed, reporting requirements, configuration/change management, defect tracking and reporting, etc.

Is the test plan part of the test strategy or vice versa? It depends on whom you ask and what they think these terms mean. There are folks who think a plan derives from a strategy while there are folks who believe that the test strategy is part of a test plan. In some cases the strategy is formulated per feature of a product too. It is important to ensure that everyone involved understands what is implied by these terms and how these tie in with your group's activities and objectives.

Static Testing

Static testing is a form of testing that does not involve execution of the application to be tested. Static testing involves going through the application’s code to check mainly for – conformance to functional requirements, conformance to design, missed functionality and coding errors.  Static testing is generally effective in finding errors in logic and coding.

Static testing may be performed by humans as well as software tools. Here, we look at static testing performed by humans. There are three main types of static testing that are performed.
  1. Desk checking
  2. Code walkthrough
  3. Code inspection
While desk-checking is performed by the author of the code who reviews his/her portion of code, the other two techniques of walkthrough and inspection involve a group of people apart from the author of the code performing the review.
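While the three techniques above are human reviews, the same principle underlies tool-based static testing: examine the code without executing it. As a minimal sketch (the sample source and the specific check are invented for illustration), here is a tiny static check built on Python's standard `ast` module that flags a well-known coding error, the mutable default argument:

```python
import ast

# Hypothetical code under review, containing a classic coding error
SOURCE = """
def add_item(item, bucket=[]):   # mutable default argument
    bucket.append(item)
    return bucket
"""

def find_mutable_defaults(source: str) -> list:
    """Flag function parameters whose default value is a mutable literal,
    without ever running the code being checked."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"line {default.lineno}: mutable default in {node.name}()"
                    )
    return findings

print(find_mutable_defaults(SOURCE))
```

Note how the finding is pin-pointed to an exact line, which, as discussed below, is one of the strengths of static testing over dynamic techniques.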

Normally, static testing is performed in the time frame between when the application is coded and dynamic testing begins. Static tests may be performed even earlier as parts of your application are developed or even at earlier stages in the development life-cycle to review design and so on.

Static testing techniques help find errors sooner. According to the generally accepted belief on costs of fixing errors, the sooner the errors are identified the less expensive they are to fix. Errors found during static testing are quicker and cheaper to fix than if they were found in a formal dynamic testing phase or even later.

Errors found during static testing can often be precisely pin-pointed to a particular location in the code. Also, going by the general tendency of defects to cluster together, it is likely that we can identify error clusters and their location, to be addressed as a batch. This quality of static testing contrasts with dynamic testing and chiefly black-box testing techniques which tend to highlight the symptom of an error. For example, one may observe errors during program execution such as incorrect validation of inputs or crashes which represent symptoms of the underlying error condition.  Further debugging is needed to ascertain the location of the error and address it.

Static testing is sometimes criticized on the grounds that it cannot unearth “all types” of defects or defects that are complex and so on. Static testing is useful to find certain types of errors more quickly or effectively than dynamic testing. It is not a case of having to choose between either static testing or dynamic software testing; both techniques are complementary and together help improve quality. 

IEEE Std 1044-2009 (Revision of IEEE Std 1044-1993)

IEEE Std 1044 is better known as the IEEE Standard Classification for Software Anomalies.

In this blog entry, we re-visit the definition of the term “anomaly” as per the IEEE Std 1044. In an earlier blog entry we had looked at the definition of an anomaly as defined in the earlier revision of the IEEE Std 1044 (1044-1993). Recently a reader of this blog wrote to me to ask about the definition as per the latest revision of the IEEE Std 1044 (1044-2009). Hence, this updated post where we look at the definition as per IEEE Std 1044-2009.

For those of you who are hearing about this for the first time, here's a very brief summary of what the IEEE Std 1044-2009 is about. This standard provides a uniform approach to the classification of software anomalies, regardless of when they originate or when they are encountered within the project, product, or system life cycle. Data thus classified may be used for a variety of purposes, including defect causal analysis, project management, and software process improvement.

As per the standard, “The word “anomaly” may be used to refer to any abnormality, irregularity, inconsistency, or variance from expectations. It may be used to refer to a condition or an event, to an appearance or a behavior, to a form or a function.”

The previous version of the standard (1044-1993) described the term "anomaly" as equivalent to error, fault, failure, incident, flaw, problem, gripe, glitch, defect or bug, which essentially removed focus from the distinction among these terms. While these terms can be used fairly interchangeably in face-to-face communication, where any ambiguity regarding their meaning is resolved by the richness of the direct communication mechanism, such loose usage is generally not conducive to precision in less direct methods. For preciseness in communication, specific terms are defined and used to refer to more narrowly defined entities such as defect, error, failure, fault and problem.

***
Join my community of professional testers to receive free updates by email. Use this link to add your email address to the community. Rest assured, I will neither spam nor share your email address with anyone else. Email subscriptions are managed by Google's FeedBurner service.

Share & Bookmark this blog entry

Stress Testing

In today's post, we look briefly at Stress Testing. As testers, we often come across the terms Performance, Load and Stress testing, which are sometimes used interchangeably. Here's a perspective on Stress Testing.

What is Stress testing?

Stress testing is a type of testing carried out to evaluate a system's behavior when it is pushed beyond its normal load or operational capacity. In other words, it involves subjecting a system to heavy loads beyond what the system is expected to handle normally, often to the point where the system breaks or is unable to handle further load. Stress testing can be a two-pronged approach: increasing the load while denying the system sufficient resources to handle it.

Stress testing helps to determine the robustness, scalability and error handling capability of the system under test. While some stress tests may represent scenarios that the system may be expected to experience during its use, some other stress tests may represent scenarios that are not likely to be encountered. Both of these types of tests can be useful if they help in identifying errors. An error that shows up in a stress test could show up in a real usage situation under lesser load.
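The load-ramping idea can be sketched in a few lines. Everything here is illustrative: `ToyServer` is a stand-in for a real system under test, and its capacity of 500 concurrent requests is an assumption; a real stress test would drive an actual service and monitor real resource limits.

```python
class ToyServer:
    """Stand-in for the system under test (illustrative only)."""
    CAPACITY = 500  # assumed maximum concurrent requests

    def handle(self, concurrent_requests: int) -> float:
        """Return simulated latency in seconds, or fail beyond capacity."""
        if concurrent_requests > self.CAPACITY:
            raise RuntimeError("server overloaded")
        # latency degrades as load approaches capacity
        return 0.01 * (1 + concurrent_requests / self.CAPACITY)

def ramp_until_failure(server: ToyServer, step: int = 50) -> int:
    """Increase load in steps until the system breaks; return the load at failure."""
    load = 0
    while True:
        load += step
        try:
            latency = server.handle(load)
            print(f"load={load:4d}  latency={latency * 1000:.1f} ms")
        except RuntimeError:
            print(f"system failed at load={load}")
            return load

breaking_point = ramp_until_failure(ToyServer())
```

Watching how latency degrades before the breaking point, and what the failure looks like when it arrives, is exactly the kind of observation stress testing is after.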

What is the purpose of Stress testing?

While not exhaustive, Stress testing helps to determine -

  • the ability of the system to cope with stress and changes/the extreme and unfavorable circumstances in its operating environment
  • the maximum capacity of the system / its limits
  • bottlenecks in the system or its environment
  • how the system behaves under conditions of excessive load or insufficient resources to handle the excess load
  • the circumstances in which the system will fail, how it will fail and what attributes need to be monitored to have an advance warning of possible failure
  • what happens in case of failure -
    • does the system slow down or crash or hang up / freeze
    • is the failure graceful or abrupt
    • is there any loss or corruption of data
    • is there any loss of functionality
    • are there any security holes that are open
    • does the system recover from failure gracefully back to its last known good state
    • does the system provide clear error messages and logging or just print indecipherable codes

Technology focussed test automation - pitfall

Ever been in a situation where your test automation project was assigned to someone who was most interested in technology and coding and wanted to get away from the "routine" of testing? Nothing wrong in being technically inclined and getting bored occasionally with testing! (Read more here on dealing with boredom in testing)

However, what normally happens is that an engineer or a set of engineers who demonstrate the most propensity to pick up a new tool or technology and run with it, while wanting to get away from regular testing tasks, are handed the reins of test automation. Oftentimes the output of such an automation effort tends to be less than desirable from a testing perspective. What do I mean? How can we have poor automation when employing our "star" technical resources? Note the point that I am making - the probability of ending up with poor automation is higher in such a scenario, where the focus is mainly on the technology or tools used in automating rather than on solving the testing problem well.

Who would you assign to do test automation? The answer to that question is a key determinant of test automation success or failure. Agreed, it is not the sole determinant. However, it does play a very significant role. A common situation that one may observe while embarking on test automation is an excessive focus on the tools or technology used to automate tests. Now, how could this be a negative factor in automation? Isn't it a good thing to be keenly focussed on the technology used in automating tests? Yes and no. Yes, since it is important to identify the right set of tools and technologies to automate your tests. You would not want to embark on an automation exercise only to meet roadblocks as the tool proves incapable of meeting your specific requirements. Now that you have the necessary tools, will it pose a problem if you continue to be focussed on the technology used to automate tests? Focus on technology is not bad in itself unless that focus makes you lose sight of the bigger picture, which is the testing problem you are trying to solve using that technology.

It is easy to become fascinated with the workings of a tool or technology and dive deep while losing focus on the reason for using that tool. Not convinced such a thing could occur? From my own experience across several projects, I have come across instances where a person in testing who has an interest in coding and solving technical challenges, and is often bored with the "routine nature" of testing, is asked to automate tests. This individual usually tends to get excited about the chance to do something "challenging" and "technical" as opposed to "just" testing.

What happens from there on is that this employee who is automating tests often gets caught up in trying to show the world what a great technical resource he/she is, and creates suites that are not maintainable in the event that this employee leaves the group, not documented enough for someone else to understand the suite easily, and not effective from a testing point of view. True, the suite might have several technical bells and whistles and look complex and shiny, but at the end of the day what counts is what the suite can do. If the sole focus while automating has been the technology used, the end result may not justify the investment in automation.

Also, the technical resource you might have handed the automation work to could be a wannabe developer who is keen on sharpening his/her coding skills before moving on. Unless you have a well-designed, maintainable and useful test automation framework/suite, your investment in automation is not justified. Some folks tend to argue that they have not spent a penny on purchasing or licensing any automation tools and hence have not really invested much in automation; they have used open source or free tools. However, tool costs are only one part of the picture. True, they can represent a sizeable chunk of the investment. However, you must account for employee costs as well as the opportunity costs of automation. What could you have done if you had not pursued this path of ineffective or throw-away test automation? What if you had chosen to put the right resources in place and managed the automation effectively? What are the costs in terms of time spent by employees in automating? What are the costs involved in having to learn how this automation suite works and then maintain it? Will changes to the product being tested or any of its dependencies break the automation suite, and if so, what costs will be incurred to make the suite usable again? These and many such questions need to be considered when evaluating returns on your investment in test automation.

For automation to succeed, the people involved should be carefully chosen and managed. An ad-hoc or improvise-as-you-go method will not produce a robust framework. Detailed planning, design and development following sound and standard practices and principles is essential. Ultimately, the tool or technology you choose is an enabler and not the reason for automation.

Software Testing - discovering and exploring!

What is software testing like? Is it for folks who got rejected in development? Is it for those who are without a job and looking for a toehold? Or is it meant to be a stepping stone to perhaps, "greener" pastures in the world of development? These questions crop up every once in a while when interacting with software testers or potential testers.

The fact of the matter is that there will always be cross-functional movement. For example, there will be a few folks from software testing who will fantasize about software development and make attempts to move at an available opportunity. This isn't necessarily bad and there is no need to berate your selection process much. Of course, if folks are jumping ship in droves, your selection process for testers surely needs a close look. Once new folks come aboard the testing bandwagon, it is a matter of time before they figure out whether they are truly cut out to be testers or would rather "follow the herd" so to speak. I am not trying to indulge in berating development or show testing as a superior alternative to any other function.

Software testing, just like software development is a valuable piece of the whole that enables a great product to be produced. You cannot do without either of the functions. You could use the analogy of your own body to denote their significance. A hand is no more or less important than a leg or the brain more or less than the heart - there may be special situations where one may be used more; however, on the whole the entire body is complete and healthy when all its component parts work together.

Back to our original question - what is software testing like and what kind of folks would find testing interesting? Software testing, more than any other function involved in producing software, is about discovery. Software professionals who love discovering and exploring are likely to feel at home in testing. Testers are constantly trying to discover what they do not know about the system. In the words of Philip Armour, "The challenge in testing systems is that testers are trying to develop a way to find out if they don’t know that they don’t know something. This is equivalent to a group of scientists trying to devise an experiment to reveal something they are not looking for. It is extremely difficult to do." He goes on to state that much of what we call method or process in testing involves heuristic strategies.

We selectively test complex predicate logic, we create test cases that span the classes of inputs and outputs, we construct combinations of conditions, we press the system to its boundaries both internally and externally, we devise weird combinations of situations that might never occur in the real world, but which we think might expose a so-far unknown limitation in the system. None of these is guaranteed to expose a defect. In fact, nothing in testing is guaranteed, since we don’t really know what we are looking for. We are just looking for something that tells us we don’t know something. Often this is obvious, as when the system crashes; sometimes it is quite subtle.
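One of the heuristics above, pressing the system to its boundaries, is concrete enough to sketch. The age field and its 18..65 range below are hypothetical; the point is that classic boundary-value analysis picks only the values just below, at, and just above each boundary instead of trying every possible input:

```python
def boundary_values(low: int, high: int) -> list:
    """Boundary-value selection for a field accepting low..high:
    just below, at, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accepts_age(age: int) -> bool:
    """Toy system under test: a hypothetical field accepting ages 18..65."""
    return 18 <= age <= 65

# Six targeted cases instead of an unbounded input space
for value in boundary_values(18, 65):
    print(value, accepts_age(value))
```

Like every heuristic mentioned above, this offers no guarantee of finding a defect; it simply concentrates tests where experience says defects tend to hide.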

If all this talk of discovery and exploration sounds like too much, you might want to reconsider your choice of testing as a career and look at other options where things are clearly stated - options involving such exciting tasks as translating defined specs into a computer language following pre-defined coding standards and guidelines, or regularly fixing issues in code you or someone else wrote. Discovering is definitely not for everyone.

Software Testing - Time machine

Looking back at some entries on this blog from a year ago.
  • A brief look at the theory that test automation might eliminate the need for manual testers: Read it here
  • A look at errors introduced in test automation vis-a-vis manual testers: Read it here
  • A look at the age-old question about QA vs QC: Read it here
  • And an exhaustive look at the question, "What is Quality?": Read it here 

Software Testing: Theory on Defect Detection

Theory: A defect can only be discovered in an environment that contains the defect. 

This seems very obvious! So why even bother to mention it, let alone post a blog entry about it? The motivation for this entry comes from a familiar situation that I am sure most testers have encountered.

Software testers devise extensive sets of tests and execute them prior to a product's release. These tests are usually executed on setups that testers have prepared in their lab environment. Many organizations invest significant sums of money to have the lab infrastructure in place. However, what generally tends to happen is that when the product is released, customers report issues pretty quickly. So, what happened? What happened to all the man-hours spent on testing and the investment in expensive lab equipment? Why did our in-house testing efforts not show up these defects that a customer seemed to find "easily"? What are we doing wrong?

This brings us to our theory: a defect can only be discovered in a system or environment that contains the defect. A defect that might show up in a customer environment may not manifest itself in a sanitized lab environment. Often our lab environment is set up and controlled based on our view of how the product is likely to be used. Within the confines of the boundaries we have defined, we execute our battery of tests and feel confident when the tests run without reporting issues. However, a customer's environment suffers from no such boundary constraints and does not find it hard to expose a defect. Defects may not necessarily be in the product itself; they could arise from the interactions of the product with its operating environment, dependencies, usage, etc.

Therefore, unless we are able to replicate in sufficient detail the customer environment and the real-world usage scenarios the product will encounter after release, we will likely continue to see an increasing trend of customer-reported issues.

Software Testing and the hammer and nail approach

If all you have is a hammer, every problem looks like a nail.

It is sometimes a similar situation with software testing and specifically with test automation. This may not be much of an issue for single-product companies but when your organization produces a range of products and has multiple teams of testers handling these different products then the likelihood of hitting the hammer and nail problem arises.

One might say that the problem is prevalent more in a centralized testing structure than in a decentralized structure. The issue I am alluding to is the mandate to use a common test tool, mostly for automation since that is an area that everyone would like to see standardized but poses most challenges to standardization.

I have come across many instances where there is an attempt to identify a common tool that can satisfy all the requirements for automating various products being tested by different test teams. In some cases, the products are as different as chalk and cheese. For example, there was a situation where we had a suite of products addressing different customer needs. This suite had a thick client application written in Java, a Win32 application, a complex Ajax-based web application, a few server products that you interact with using CLIs and APIs, some middleware products and a few mobile applications.

Trying to find a common tool to handle all of the varied requirements for automating the different products could lead us down three possible paths.

1. You talk to a few tool vendors, who will naturally promise the moon and claim that whatever tool they are selling can automate every kind of application that ever existed or will come into existence in the undetermined future. You could take this option like many folks do: purchase expensive tools and licenses and then mandate your testing teams to use these (and only these) to automate in a standardized manner. A simple solution using a brute-force approach.

2. The second approach may be to take the compromise route. Here, you realize that a single tool may not be able to handle all the unique requirements of automating each product, and go ahead to procure a tool that ends up being the lowest-common-denominator solution. The tool ends up being a "good enough" solution, which essentially means that it is neither good nor enough from a testing perspective. However, the organization gets a standardized jack-of-all-trades kind of tool that all groups can use with varying degrees of effectiveness.

3. A third approach could be to take the time to understand the specific requirements and needs for automating each product, perform due diligence in identifying & then evaluating the tools that best fit the specific automation requirements before deciding on what tools to procure. This may result in having to procure more than one tool. If your products have similar automation requirements you may end up with needing just one tool but in cases where you have products with differing needs similar to what I stated as an example earlier, you might realize that more than one tool is needed to perform effective test automation.

Effectiveness, is the key here. Ultimately you automate not for the sake of automating but to support your testing campaign and deliver value. Trying to force teams to use a specific tool that does not fully support their needs is akin to the hammer and nail solution. When all you have is a hammer, then every problem is dealt with as you would a nail. Sometimes it is the right thing to do and in many cases it may not be the optimal approach to follow. In testing, such an approach could lead to teams bending their testing and automation practices around what the tool is capable of while ignoring other possibly important areas of the product which are harder to automate using the tool. The automation tool should not dictate how & what you test. It should truly be a tool (amongst other tools and techniques) in your arsenal enabling you to tackle the challenges of testing your software.
***
Join my community of professional testers to receive free updates by email. Use this link to add your email address to the community. Rest assured, I will neither spam nor share your email address with anyone. Subscriptions are managed by Google's FeedBurner service.

Share & Bookmark this blog entry

Image: Carlos Porto / FreeDigitalPhotos.net

Software testers, sharpen your saw

How would you describe a day in the life of a software tester? Get in to work and test software? I realize this is an over-simplification, but I am sure most descriptions would place testing as the task that takes up most or all of a tester's time. After all, isn't that why testers are paid? Software testers are normally expected to test and to use their time optimally to find defects in the software. Test all day and find issues. We live in a society that values busyness and activity, and it is easy to get caught up in the frenzy of running tests and trying to find issues. Before you think I am advocating that testers not do testing, let me clarify: testers must test! That is their primary job responsibility. However, testing isn't all that a tester must do if he or she is to remain relevant and valuable in the future.
At this point I would like to digress a bit to touch upon a concept from Stephen Covey's book, The 7 Habits of Highly Effective People. It is called "Sharpen the Saw". It means preserving and enhancing the greatest asset you have, which is … you.

The book talks about having a balanced program for self-renewal in the four dimensions of your life: physical, social/emotional, mental and spiritual. Self-renewal enables you to create growth and change in your life. Sharpening the saw keeps you fresh so you can increase your capacity to produce and handle challenges. The book goes on to say that without this renewal, the body becomes weak, the mind mechanical, the emotions raw, the spirit insensitive, and the person selfish. 

This concept of sharpening the saw finds expression in an oft-repeated tale about two wood-cutters who set about cutting trees with their saws. One wood-cutter goes to work and relentlessly keeps at it, spending a great deal of time and effort continuously cutting wood.

The other wood-cutter cuts some wood and then takes a little time off from cutting wood, to sharpen his saw. He then goes back to his task of cutting wood. He does this repeatedly. At the end of the day, it is observed that the second wood-cutter has cut more wood (increased productivity), is more relaxed (less stressed) and has both himself and his tools in good shape to handle another day's tasks. 

The saw of the first wood-cutter gradually grew blunt with use. His reaction was to increase his own effort, hoping that it would compensate for the dulling blade. Needless to say, the first wood-cutter ended up burned-out and tired, and probably surprised that all his effort produced less than optimal results. He had plenty of busy-time, but his results did not match his level of activity.

What's all this got to do with software testing? I am sure most of you have figured out the connection and where we are heading. It is true that testers must test, but that isn't all a tester should do. The best testers understand the principle of sharpening the saw. They take the time to continually develop their skills and to work on their creativity and thinking. These testers strive to stay abreast of developments in their field. Self-development need not be limited to areas directly connected to testing; do not hesitate to look at areas that may seem unrelated to testing. You never know where you might find ideas that can be applied in your work.
***


Emotions and feelings in testing software

Software testers generally look at the requirements to figure out how the product must behave. Often these requirements cover the functional attributes and some non-functional attributes, including performance, security, some elements of usability, and so on. Tests are developed with expected results that align with these product requirements. So far so good: there is a clear line from the written-down requirements to the tests.

As you proceed with executing your tests, there may be instances where you feel irritated or frustrated with aspects of the software under test. You might feel a range of emotions as you test. At such times, what do you do? Do you listen to your feelings, or do you go with the script, merely looking for the expected behavior described in the test cases you are executing? If the test produces the expected behavior, yet you have experienced conflicting emotions during testing, what would you do?

As a software tester, do you need to give importance to the emotions and feelings you encounter during testing, or do you have to leave these "softer" aspects of yourself outside the door before beginning a test campaign? Is testing based purely on logic and written-down requirements alone? Is there value in listening to what your inner "voice" and feelings are trying to say as you test the software?

In my view, testers need to listen to their feelings while testing. That said, some testing may require you to simply follow the script and stop at checking the stated expected behavior. The good news, however, is that for most testing, feelings and emotions are a useful added aid to the test effort. At this juncture, it helps to remember three basic concepts:
  1. Plain definition of a bug: a "bug" is something that will "bug" someone who matters. That someone is most likely a user of your software, or someone significant enough to have an impact on your organization.
  2. Not all software requirements are written down or even stated. A significant number of requirements are left unstated, assumed, or implied.
  3. In many cases, customers may not really know everything they need from the software at the outset, when requirements are being firmed up. This is especially true of attributes such as usability.
If your emotions are telling you something, listen to them. If you feel frustrated while using your software; or confused by non-intuitive interfaces; or tired of waiting for the software to respond or process information; or unhappy with the workflow or any other aspect of the software, and if these are not explicitly stated in your requirements, do not ignore them just because the written requirements say nothing about these experiences. If you are bugged by something in your software, it is likely that users of your software will be bugged too, and that can have far more serious implications.

Something about the software that frustrates or irritates you can very well irritate or frustrate your product's users. In the interests of making your software more user-friendly and intuitive, allow your feelings to ride along as you test. Report any issues you find or improvements you would like to see.

The other thing to remember is that issues raised based on how the software makes you feel may not always go down well with your counterparts in development. Some issues may be accepted and fixed in the current release if they are deemed to cause significant inconvenience or loss of functionality for users. Some may be deferred to a future release, and some may be closed as not being bugs.

If you truly believe that a bug is valid and severe enough (a function of frequency of occurrence and impact) to merit attention, you should have a discussion with your developers or with someone who can make decisions on the bug's status. It can be a great help if you can compare your software's behavior or interfaces with a similar application elsewhere; this could be a competing product or a substitutable offering. If such an opportunity for comparison exists and you can show where your software is lacking, it can boost your case for having the issues fixed. Having a customer representative put their weight behind the issues you raise is also a big push toward getting them fixed.

Ultimately, realize that some issues will go unfixed. However, your pursuit of ensuring that customers have the best possible experience with your organization's product should continue, and toward that end, keep an eye on what your emotions and feelings are telling you as you use and interact with the software you are testing.

A great piece of software is not just one that meets all the functional requirements, but one that goes beyond them, anticipating user needs and potential pain-points and trying to address them.
***



Does Agile development need specialists?

"Agile is to software development what poetry is to literature. 
It is 
very difficult to do well, very easy to do poorly, 
and most people 
don't understand 
why a good poem is good and a bad poem isn't...”
- from the web

Transition from a traditional development model to an agile methodology is often met with some skepticism and doubt from testers. Books and programs on agile development tend to emphasize the need for multi-skilled generalists who can take up different functional roles as needed. This tends to make specialist testers worry about losing their identity in an agile world. Is there a need for specialists in agile, or is the agile world inhabited by generalists, the proverbial jacks-of-all-trades?

To answer this question, look no further than your favorite team sport. Agile software development is a team process, with members from the different functional groups coming together as one team to produce software. No longer are they members of distinct teams such as development, testing, technical writing, i18n, l10n, etc. So, back to our question: does agile need specialists, or is the new world full of generalists?

As in team sport, a successful team cannot consist of 1) members of only one type, who specialize in a single activity such as development or testing alone, or 2) members who are all generalists, knowing part of every function but specialists in none. How would you rate the chances of your favorite sports team if it comprised either kind of member? Taking cricket, which I follow: a) a team of just batsmen, or just bowlers; or b) a team of all-rounders only, which might fare better than the first choice but is still not the best. An agile team takes a step toward success when it comprises specialists from the different functions working together collaboratively to produce software. Each member brings to the table a unique set of skills that influence the software's development and the success of the project. However, an additional requirement of these specialists is the ability and willingness to take on some non-core tasks: for example, a tester who can debug defects and make small fixes if need be, or a developer who can also document their feature or do some testing.

There is truth to the statement that agile needs generalists. However, it is better to have specialists who can go beyond their functional domains to help toward the release than specialists who restrict their involvement to their area of specialization. Ultimately, agile development is a team effort, and testers, like everyone else on the team, must own the release from the beginning rather than merely trying to police it at the end.
***


Software Testing - Time Machine: from a year ago

"The past is behind, learn from it,
The future is ahead, prepare for it,
The present is here, live it"
- Thomas Monson

In this entry we look back at a few of the blog entries published here about a year ago. These entries touched upon some of the very basics of software testing. I hope you still find them useful and interesting.


***

Agile development and selective implementation of methodologies

Abraham Lincoln said, “If you call a tail a leg, how many legs does a dog have? Four. Because calling it a leg doesn’t make it a leg.”

In the same vein, calling a project team agile does not make it agile. Let's admit it: the term "agile" is a buzzword that is cool to use. Sometimes (or should that be oftentimes) people pick the aspects of a methodology that suit them and attempt to fit those into their existing process, with even worse results than before. And then they wonder why the new methodology isn't working! The same thing happens with the adoption of agile. In this entry we look at some of the agile principles and practices that are easy to pick and choose selectively.

a) An agile team is capable of releasing the software to customers at the end of each iteration (generally two to four weeks long). Yes, able to ship it to customers: developed, integrated, tested, wrapped up and mailed. Customers see working software, with features available in increments, and understand the progress being made. Of course, customers can also provide quick feedback to enable course corrections as needed. Decisions can be made on whether to add features, change existing functionality, or even stop further development, without waiting for the complete release time frame.

When an organization selectively implements this aspect of agile without following the other aspects of the chosen agile methodology, it can lead to compressed release schedules and require squeezing activities such as development and testing to get a product out sooner. Merely compressing schedules while otherwise following non-agile methods will only lead to worse results than before.

b) Agile does away with distinct phases of development. With agile, you are no longer required to have distinct phases such as coding followed by testing and so on. This means that developers and testers (along with other required functions) work together in parallel as one team. However, this can easily be misinterpreted as a license to keep coding until the last minute before the release is due. And we can easily imagine what happens when development continues coding to the end of the release while the organization otherwise follows a non-agile methodology.

c) In the agile context, the software is a moving target. In agile, change is the norm: developers can add new features or make changes at any time. In traditional methods, testers push for an early code freeze so they have time to test feature-complete software; in an agile context this is usually not possible, which poses significant challenges for testing. At the same time, this freedom is easily misused when developers are allowed to keep making random changes at any time without the organization fully embracing an agile methodology. It isn't hard to imagine the consequences of developers making changes all through the release while the organization follows a mostly non-agile methodology or a mix of agile and non-agile techniques.

d) Agile values working software over comprehensive documentation. The reduced emphasis on documentation is substituted with increased face-to-face communication and collaboration. For example, in Scrum (an agile methodology) there are daily standup meetings in which the team (developers, testers, etc.) gets a common understanding of where each member stands, any obstacles to progress, achievements, and so on. Techniques such as retrospective meetings and co-location also foster communication in ways documents cannot match. It is, however, easy to pick this one aspect as an excuse to do away with documentation entirely, or to reduce it to an insignificant activity on the back-burner. Needless to say, a traditional methodology minus good documentation is a recipe for a poor-quality product.

e) An agile software development team can add features in any order. Yes, but this can quickly get out of hand if not implemented right. In agile development, features are added in the order that makes the most business sense. That is a significant change from allowing developers to choose which features to add; their natural propensity would be to add the features they find best for themselves, or easy, or cool, which may not be the best order for customers or the business. Given limited time and resources to deliver a product with a set number of features, the best approach is to add the features with the most relevance to the customer and the business within the available time: focus on the important (from the customer's point of view) features as early as possible, leaving the less important ones toward the end of the project. That way, if the project runs out of time or resources, the features that get dropped from the release are the ones of lesser priority.
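The value-first ordering described above can be expressed in a few lines. This sketch uses invented feature names and single-number business-value scores; a real backlog would also weigh cost and risk.

```python
# Illustrative backlog ordered by business value; names and values are
# made up for the example.
features = [
    ("export to PDF", 3),   # (feature, business value on a 1-10 scale)
    ("login/auth", 9),
    ("audit trail", 6),
    ("custom themes", 2),
]

# Schedule highest-value features first; if the project runs out of time,
# items are cut from the low-value tail rather than at random.
schedule = sorted(features, key=lambda f: f[1], reverse=True)
iterations_available = 3
planned = schedule[:iterations_available]
dropped = schedule[iterations_available:]
```

With this ordering, whatever is dropped when time runs out is, by construction, the lowest-priority work.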

These are some common agile principles that are easy to pick and adopt, albeit incorrectly and inappropriately. As Alexander Pope's poem states, "A little learning is a dang'rous thing; / Drink deep, or taste not the Pierian spring: / There shallow draughts intoxicate the brain, / And drinking largely sobers us again."
***

Software Testing & Buying stuff to create a new habit!

I enjoy reading, and I do not stop at software testing or technical subjects. My reading covers a variety of topics and sources, including other blogs, books, magazines, and web sites. One of the subjects I follow is personal finance. Recently, while reading on this subject, I came across an article about "buying stuff to create a new habit". It struck a chord, and I could relate to what it said.

The article talks about the human tendency to buy stuff in the desire to create a new way of life or habit. For example, let's say that one day you feel drawn to exercising. You decide to work out at the gym regularly. You enroll in a gym, which requires a minimum membership of, say, six months. At that moment you are passionate about exercising and pay the necessary membership fees. You plan to get up early each morning and hit the gym for about an hour of fitness training.

Early the next morning you are all perked up and ready to begin a new routine. The first day at the gym feels great, and you think you have made a good decision in enrolling. You think this is something you want to do regularly from now on. If you are like most of us, after a week or two you might find it harder to get out of bed in the morning and hit the gym. You find you have many tasks that require your attention and very little time to spend exercising, or some other reason causes you to postpone the gym to another day. You might think, "After all, I am just skipping the gym for a day. I'll make up for it tomorrow; probably do a few extra sets." Before long, your passion for exercising wanes, and before you realize it you are busy with something else.

In hindsight, you might observe that you spent a significant sum on the gym membership (plus, probably, on other paraphernalia such as suitable clothing and shoes) without really having used it much. You have lost a lot of money chasing a passing interest. Of course, there are exceptions: some people continue to pursue such interests with sustained passion. On average, though, people tend to engage in similar activities (not necessarily exercising, but a range of other passions) that affect their financial well-being without bringing any real benefit.

There are many such examples, and they are not hard to find. Look at your own life and see whether you can find instances where you paid money or bought stuff in the hope of getting better at something, being someone else, or acquiring a new habit. For example, have you spent money on expensive sporting gear hoping it would improve your game? Or enrolled in courses or programs in the hope of being miraculously transformed into whatever you were hoping to be? Or purchased equipment or tools with a similar hope in mind? I am sure that if we look hard enough, such instances will show up. This period of fascination with a new attraction is termed the "honeymoon period". Because of our focus on the new activity or project, we tend to think we will want to continue it for a long time to come. Consequently, we feel the urge to equip ourselves for the long haul and "invest" in outfitting ourselves appropriately.



If you are still reading this and wondering why I am talking about personal finance on a testing blog, worry not. I was drawing a parallel to how organizations behave with regard to tools used in software testing, and test automation in particular. An organization may have set lofty goals for test automation and be looking to obtain a tool. It may be that a vendor has sold the decision makers on the virtues of their automation tool and convinced them of its extraordinary (probably bordering on supernatural) capabilities.

Sales pitches may expound the tool's "simple record and play" automation capabilities, its ability to cut test cycle time from weeks or days to a few hours, its ability to reduce or even eliminate the need for human testers, its ability to automate all of the tests, the possibility of having a "one-click" automation suite up and running in very little time irrespective of the complexity and nature of the software being tested, and many other entertaining claims. What happens next is that, without much due diligence, the organization pays significant sums for licenses, and the mandate goes out across the testing group to start automating with the shiny new toy. What are the chances that the tool will meet the requirements of the testing group; be compatible with the software being tested; have a short, easy learning curve; automate the existing tests (web-based, client-server, GUI/CLI/API, etc.); handle the volume and load of real-world testing; support the software's requirements (such as multiple browsers and versions for web-based software, different OS platforms, databases, and environments); and satisfy the many other needs that determine whether the tool will really be of use, or turn out to be a waste of money, time and resources?



From experience, I have come across instances where tools procured by various groups were either under-utilized or remained unused despite the huge sums paid for them. Once it dawns that the purchase was a mistake or not the right choice, it tends to be a downhill ride from there for automation and for usage of the tool. The impact of choosing an inappropriate tool can be the subject of another blog post. Suffice it to say that procuring a tool without analyzing your specific needs and requirements, evaluating various tools and vendors, performing due diligence, trying out a trial version against your product, and setting the right expectations for automation can be an expensive proposition.
***

Software Testing: To fail is to succeed

Here's an entry that goes back to the basics of software testing. If you are wondering what the title means, let us quickly revisit the classic definition of testing, from Glenford Myers' The Art of Software Testing: "Testing is the process of executing a program with the intent of finding errors".

Testers can sometimes get entangled in the definitions of "success" and "failure" in relation to testing. The question is: when is testing considered successful? To add to their woes, many stakeholders refer to a successful test campaign as one in which the tests passed without finding any issues. But going by the definition above, testing is performed with the intent to unearth issues, which is to say that success in testing is when a test fails rather than when it passes. A test that has failed, and thereby found an issue, has actually succeeded.

As a basic analogy, suppose you take your car, which rattles and probably leaks some oil, to a mechanic for inspection. The mechanic runs a battery of tests, none of which find any issues, and based on those results your car is certified to be in perfect condition. Would you call such testing successful? You still have the problem, plus you have incurred expenses toward testing the vehicle, which turned out to be fruitless. If the tests had found an issue, you would naturally consider your investment in testing worthwhile.
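The idea that a failing test is a successful test can be shown in a few lines of Python. The discount function and its bug below are invented for illustration: the boundary test "fails" and thereby succeeds, because it exposes the defect.

```python
def apply_discount(price, percent):
    """Return price after a percentage discount.
    Deliberate bug: // truncates, silently dropping fractional discounts."""
    return price - (price * percent // 100)

def run_boundary_test():
    """This test succeeds by *failing*: the mismatch exposes the bug."""
    expected = round(99.99 * 0.90, 2)   # 10% off 99.99 -> 89.99
    actual = apply_discount(99.99, 10)  # buggy result: 90.99
    if abs(actual - expected) > 0.005:
        return "FAIL (defect found)"
    return "PASS (nothing learned)"
```

A run of all-green tests here would mean the battery had missed a real defect, just like the mechanic's clean inspection of the rattling car.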
***

QA, QC and Testing ...

In an earlier blog entry that talked about QA (Quality Assurance) and QC (Quality Control), I said that testing was a QC activity. Recently I received a message that asked, "If QC is the same as testing: In what way, by means of testing, are you controlling the quality?"

Software testing is one of the QC techniques. Other QC techniques include inspections, reviews, and walk-throughs of work products such as requirements, designs, code and documentation.

QA aims to assure that quality work and quality deliverables are built in before work is completed. QA focuses on the ability of a process to produce or deliver a quality product or service. The intent of QC is to determine that quality work was done, after the work has completed. For software, the QC function may involve checking the software against a set of requirements and verifying that it meets them. QC examines the results of a process to determine the degree to which they conform to expectations. The "control" in QC involves detecting problems with a product, catching "poor quality" before it ships to customers. Looked at another way, when QC finds instances of "poor quality", it implies the group has already spent resources and time producing a product with poor quality built in.

QC includes all tactical activities necessary to produce a quality product or service, while QA looks at quality from a strategic perspective. QC focuses on identifying problems after they occur; QA focuses on preventing problems from occurring. Inputs from QC may feed into the QA process. For example, when QC finds recurring issues in an area, QA can look at improving the processes involved in producing that functionality or feature, to minimize the occurrence of similar issues going forward.

Regarding the debate on whether it is appropriate to call testing either QA or QC, I tend to agree with Michael Bolton's view of testing and testers: "We don't own quality; we're helping the people who are responsible for quality and the things that influence it. Quality assistance; that's what we do."
***

Software Testing & Boredom at work


The motivation for this blog entry came from a tester who recently told me that he was bored and his job seemed monotonous.

Before jumping right in to suggest possible solutions, let's digress a bit to take a closer look at the concept of boredom (as if it weren't boring enough to talk about)!

According to psychoanalyst Otto Fenichel, boredom occurs, "When we must not do what we want to do, or must do what we do not want to do." Though the feelings of being bored by routine tasks are often transitory, longer-term boredom can set in from a lack of meaning or purpose in life.

Most people blame boredom on their circumstances, but psychologists say this emotion is highly subjective, rooted in aspects of consciousness, and that levels of boredom vary from person to person: some individuals are considerably more prone to boredom than others. Boredom is not a unified concept but may comprise several varieties, including the transient type that occurs while waiting in line, and so-called existential boredom that accompanies a profound dissatisfaction with life. Boredom is linked to both emotional factors and personality traits. A person may feel bored when he or she
  • perceives that there is little value in doing the job
  • feels that what is being done is not challenging
  • feels there is not much to contribute
These are not the only reasons for boredom. Boredom can occur when people feel that their skills or talents are not being used, that their efforts are not valued, or that what they do is of little or no value. Sometimes people lose motivation, close themselves to new ideas, or only consider things that fit their "comfort zone". Some people have problems with attention that also play a role, so techniques that improve a person's ability to focus may diminish boredom.

Boredom can be a motivator too: it may be telling you that it is time to wake up and make some changes to what you are doing.

Can software testing ever be boring?

As someone who claims to be a software testing professional, I am tempted to say "never". Testing is fun and challenging. The fact is, however, that testing can at times seem tedious and monotonous. Good software testers must be able and willing to accept a certain degree of repetitive activity as part of their job; monotony becomes a problem only when it becomes the norm. There may be times when your job requires performing tasks that seem boring, for example, being asked to execute the same set of tests manually on one version of a product across multiple platforms to check for compatibility. When boredom sets in, the normal human tendency is to short-circuit the testing activity: executing fewer tests than required, not paying close attention to the test results, assuming that a test that passed in the previous run will pass now, being closed to any new issues that may be lurking, and so on; all of which are detrimental to the quality of the product being tested.

A generally suggested solution to overcome boredom in manual test execution is to automate the tests. My advice is to not jump in immediately to automate tests. Manual testing has a lot of value. Of course, on the face of it test automation does make sense but there are considerations to be made before embarking on such an exercise. Test automation must be approached with the same rigor and discipline as your organization would approach a software development project for its customers.
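One way to apply that rigor is to turn a repetitive manual pass into a small data-driven suite, where each platform becomes a row in a table rather than another hour of hand-checking. The sketch below is only illustrative: `normalize_path` is a hypothetical stand-in for whatever behaviour you would otherwise re-verify by hand on each platform.

```python
# A minimal sketch of data-driven automation for a repetitive manual check.
# normalize_path is a hypothetical system under test, standing in for any
# behaviour you would otherwise re-verify manually on each platform.

def normalize_path(path, platform):
    """Toy system under test: convert a path to the platform's separator."""
    if platform == "windows":
        return path.replace("/", "\\")
    return path.replace("\\", "/")

# One table row per platform replaces one manual test pass per platform.
CASES = [
    ("windows", "docs/readme.txt", "docs\\readme.txt"),
    ("linux",   "docs\\readme.txt", "docs/readme.txt"),
    ("macos",   "docs/readme.txt", "docs/readme.txt"),
]

def run_compatibility_suite():
    """Run every case; return details of any failures."""
    failures = []
    for platform, given, expected in CASES:
        actual = normalize_path(given, platform)
        if actual != expected:
            failures.append((platform, given, actual, expected))
    return failures

if __name__ == "__main__":
    # An empty failure list means every platform case passed.
    print(run_compatibility_suite())
```

Adding a new platform then costs one line of test data, not another manual cycle, which is exactly the kind of return that justifies treating automation as a proper engineering effort.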

Also, having your tests automated does not mean that testers will no longer face tedium or boredom. Ask testers who have to write and regularly maintain test automation, and you will find that some element of fatigue and boredom creeps in even with automated testing.

So, do we resign ourselves to the fact that some part of a tester's job will always be boring? Is there a choice? What can you do if you are frequently bored?

"Don't blame your job, the traffic or your mindless chores," says Anna Gosline in a December 2007 article at Scientific American. Instead, look to yourself for options you may have to relieve boredom. Find a way to inject variety and stimulation into routine tasks.

We operate at our best when we are utilizing our strengths. Look for creative ways to alter your tasks or the way you approach your work to utilize your strengths.

If you feel that you do not have enough to do, talk with your manager and ask for more work, including more challenging responsibilities. A positive conversation with your manager could lead to an altered job description, which may alleviate any feelings of boredom you might have.

If you are part of a larger testing team, check if you can exchange some tasks with your colleagues. That way both of you get to work on something different.

Constantly ask yourself how you can add more value to what you are doing. Adding value will result in at least two things: one, you will definitely feel much better and more satisfied; and two, the organization will view you as enthusiastic, interested and pro-active, all very good qualities to exhibit, which may in turn lead to better responsibilities going forward.

The power of your thoughts cannot be emphasized enough. Begin to think differently about your work. Realize how your thoughts drive your actions.

When working continuously, take a break. It often helps to clear your mind and relieve some stress.

Be the employee who takes active interest in not just individual performance but also keeps the larger interest of the group or organization in mind. Look for opportunities to suggest changes and try something different.

It is a good idea to find a mentor. Someone who can coach and guide you will prove very useful. Do not think that coaches and mentors are just for junior employees; even CEOs need coaches. Everyone, at any level in the organization, can benefit from a good mentoring relationship.

Of course, a list of tips to alleviate boredom at work should have this one: learn more about your product and area of work. Do you already know your product well? If yes, then set a goal to be the master of your product; strive to learn all that you possibly can. If not, start learning more. Being in software testing does not put you at a disadvantage in mastering your product. As a matter of fact, you have a head start, since you would have a breadth of knowledge across most areas of the product which your development counterparts may not. Strive to become the "go-to" person of your group, the one everyone turns to when they need information about the product. I have both been and seen such an expert many times, and the satisfaction you get is tremendous.

Finally, always aim to improvise. "Restructure the job in your own mind," says renowned psychology professor Csikszentmihalyi. "Approach it with the discipline of an Olympic athlete. Develop strategies for doing it as fast and as elegantly as you can. Constantly strive to improve performance - doing it in the fewest moves, with the least effort, and with the least time between moves."
***

Software Testing, where conflict is normal

In our daily lives, conflict is generally viewed as undesirable. This could be true in our personal and professional relationships. However, while producing software, conflict and problem solving are key elements involved in enabling delivery of quality products. Organizations on their part must encourage constructive conflict while keeping in place a structure to manage conflict.

Let's face it: if you are the type who prefers a stress-free, non-confrontational role, then software testing is not for you. Software testing is not just about having the requisite technical competencies and analytical skills. Software testers need a set of soft skills and a mental make-up that enable them to survive and thrive amidst conflict. If you like being out in front, dealing with conflict, and are not worried about how people will react to the information you convey, then you could be on the way to being a software tester. Software testers must not shy away from taking up an adversarial position when required.

Software testers report problems. Testers are the bearers of "bad news", and the recipients of that news may react in myriad ways, some ego-deflating, sarcastic or plain rude. Software testers need to walk a fine line between being overly zealous or judgmental about the issues they have observed, and going soft, worrying whether they should invite conflict by even relaying information about an issue.

For new software testers, their initiation into the process of finding a defect and reporting it can be an experience to remember. Over time, as testers discover more defects of differing severities, they become more confident in their own abilities while building a rapport with developers. Instead of viewing the inherent conflict as confrontational, testers begin to engage in meaningful discussion about the problems identified and how to address them. All software testers will experience some form of push-back from their development counterparts. This is not bad and can actually be very healthy. When either side, software testing or software development, simply agrees with what the other says without debating and clarifying the issue, it could be a sign that something is amiss. A certain degree of healthy conflict and debate helps in the thorough analysis of issues being reported and in the development of better-quality solutions.
***

Software Testers focus on Software Users

Knowing what the user wants is important in producing software.
Also, knowing how the user would use the software is important to producing quality software.

Software testers should focus on both: understanding user requirements, as well as how a user might interact with and use the software. This understanding helps testers design scenarios that are closer to reality.

It is usually straightforward to come up with the positive tests; even developers can do this to a large extent. When developers build software, they have an expectation of how users "should" use and interact with the product, which is tied to how they have designed the system.

Smart testers design their tests, especially their negative and error tests, keeping in mind how users actually behave. In the real world, users will make mistakes as part of the learning process, not read the complete documentation or manuals for your product, interact with the product in ways the developers do not expect, provide inputs that are not the values your system expects, and do various other things that could expose chinks in your product's armor. In fact, we all make "mistakes" as users of different software products: as we try to familiarize ourselves with or explore a product's features, we end up doing things the developers may not have envisaged. Testers need to incorporate testing for errors, "nonsensical" actions, invalid inputs, and the like, to mimic real-world actions by users.
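As a concrete illustration, negative tests can be driven from a table of the kinds of "wrong" input real users actually produce. The sketch below uses `parse_age`, a hypothetical input handler invented for this example; the point is the shape of the inputs, not the function itself.

```python
# A sketch of negative testing: deliberately feeding the kinds of invalid
# input real users produce. parse_age is a hypothetical system under test.

def parse_age(raw):
    """Toy input handler: parse a user-supplied age field."""
    if not isinstance(raw, str) or not raw.strip().isdigit():
        raise ValueError("age must be a whole number")
    age = int(raw.strip())
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Inputs mimicking real-world user mistakes, not the "happy path":
# empty fields, whitespace, text, negatives, decimals, absurd values, None.
NEGATIVE_CASES = ["", "   ", "abc", "-5", "3.5", "999", None]

def run_negative_tests():
    """Return any inputs the system wrongly accepted."""
    unexpected = []
    for raw in NEGATIVE_CASES:
        try:
            parse_age(raw)
            unexpected.append(raw)  # should have been rejected
        except ValueError:
            pass  # rejection is the expected outcome here
    return unexpected
```

Each row is a user behaviour, not a code path, which is what keeps this style of testing anchored to reality rather than to the developer's mental model.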

This user focus translates into how defects are reported. Testers assign a severity value to each defect, reflecting the tester's estimate of that defect's impact on the user; severity also factors in the likelihood and frequency of users hitting the issue. To focus on users, testers need to be encouraged to think independently and not just go with what developers think testers must test: developers come with a perspective of how the system is designed and their own baggage of expectations about how the software should be used. Exposing testers to customers, users, or customer-facing groups can help them approach testing with the user perspective in mind.

Software Testers as Generalists

A generalist may often seem to be someone who is like the proverbial "Jack of all trades but master of none". In producing software, one of the significant differences between the software testing and software development functions is the presence or absence of generalists. Typically, a software developer is a specialist, expected to specialize in a specific area; the emphasis in software development is on depth of knowledge. There is little scope for a generalist here unless you move up the food chain and occupy a senior managerial position. In contrast, the software testing function values generalists: for them, the emphasis is on breadth of knowledge rather than depth alone.

A generalist software tester is able to test and comment on a product or feature without needing to know about its internal workings. Generalist software testers are often required to come up to speed quickly on a new product or feature and test it from an end-user perspective. This requires them to gain a broad understanding of the various aspects of the product in a short time. These software testers bring in a different perspective in comparison to the software developers.

On the face of it, this emphasis on breadth over depth of knowledge may cause generalist software testers to be viewed as "ignorant". However, it is this very "ignorance" that helps them examine the application under test (AUT) the way a user would, without being too familiar with its internal workings or technological underpinnings. Generalist software testers do, however, need to be well aware of their customers' usage, domain and environment. This domain knowledge, coupled with a broad-based understanding of the product, helps generalist software testers add significant value to the organization.

Software Testing and Software Development

Two significant functions are involved in producing software; as different as chalk and cheese, yet co-dependent, and they must work together to produce a quality deliverable.

Speaking of dependence, both functions depend very much on one another, though on close examination software testing may seem to have the greater degree of dependence. From needing relevant documentation to kick-start test planning and test development, to incorporating testability into the software, receiving builds to test, and getting timely fixes for issues, the testing function relies heavily on development; when testers hit test-stopper issues, testing halts until those issues are addressed. Software development is equally dependent on software testing. Testing provides critical and valuable information to development (and stakeholders) about the software being developed. Software testing is by its very nature a support function, providing services to the overall organization; a valuable output from testing is information about the software being produced, which enables informed decision-making.

It is not uncommon to see testers face time crunch, or time compression, more than developers. This usually occurs because software development normally happens prior to formal and extensive testing. Even when testing is distributed across phases of the software development life cycle, the more rigorous and formal testing activities are generally slated to occur post feature-complete, i.e. once development has finished implementing the planned features. When development slips its estimated schedule on a project with a fixed release date, the time available for downstream activities such as testing tends to get cut. This supports the view that developers have more flexibility, time-wise, than testers; it also means testers need better contingency planning to handle changes to schedules and staffing requirements.

The general view holds software development to be a constructive function, leading to the creation of a product or feature, while software testing is often viewed as a destructive function that attempts in various ways to break what has been built. Yet these opposing functions and viewpoints are both necessary to deliver a quality product, much like the Yin and Yang principle of Chinese philosophy.
***

Time-boxed Software Testing

A primary problem testers face relates to the time available for testing: there is never enough time to do all the testing that could possibly be performed. Testing always involves a trade-off, making choices about what and how much to test within the constraints of available time. Look at it this way: if we were to measure test coverage as a percentage of all the tests that could potentially be performed, coverage would always be zero, because for any significant real-world system the number of tests that may be run is infinite.

Testers need to understand requirements, assess risks and prioritize the tests to be performed. It is essential that testers involve stakeholders when deciding what to test, what not to, and how much. Time-boxing of testing is not restricted to any particular phase of activity: time constraints are set from the start and tend to escalate as the project moves towards completion. It is not uncommon for testers to be pressured to finish testing quickly or to shorten test cycles as the release date approaches. It is important for testers to clearly communicate the risks involved in dropping any tests to accommodate shorter-than-normal cycle times; stakeholders can then use this information to weigh the risk-return trade-offs and decide on the release.

New Manager - tip

This little tip is intended especially for folks who have recently moved into a manager role from an individual contributor or technical engineer position. You now have people reporting to you. So, how do you deal with your direct reports?

Let us take a step back and look at the reason for your promotion to the manager role. Why were you promoted? In most cases, your stellar performance as an individual contributor or technical resource played a significant part in your elevation. And your performance as a technical resource, whether as a developer or a tester, would have come largely from dealing with your tasks in an "object-oriented" manner.

What I mean by "object-oriented" in relation to tasks is treating different tasks or work items as objects or modules that have a defined interface for interactions, and that for the most part exhibit a black-box quality hiding their internal complexities. By organizing your work so that you abstract out and deal with the various tasks as distinct black boxes, each with a specific public interface, you make your tasks simpler and easier to handle. Handling complex code or applications as a set of modules interacting via interfaces promotes efficiency and keeps things clearly demarcated. This works well for software development or testing.

However, when a similar concept is applied to people, things take a different turn. Coming from an individual contributor position, it is easy to apply the principles that worked well earlier to the new role. People, however, cannot be treated as distinct black boxes: their work life cannot be seen in isolation from what happens outside of work; merely assigning people responsibilities does not mean they will be able to handle them; and each person is unique and deserves exclusive focus and attention. To perform well as a manager, it is essential to grasp a basic precept of managing: people cannot be modularized or commoditized.

Test Case, Test Condition, Test Procedure, Test Control, Test Execution

As testers, we come across various terms related to our line of work. Here are a bunch of terms prefixed with the word "Test" that we seem to encounter fairly often. Let us quickly look at what these terms mean.

A Test Condition is an aspect or attribute of a system or component that can be verified using test cases. Examples: a feature, a GUI attribute, a function, etc.

A Test Case is a set of input values, pre-conditions, expected results and post-conditions that are created to verify a particular test condition or objective. Example: to verify a specific requirement.

A Test Procedure is a document that specifies the sequence of actions involved in executing a test. A test procedure is also called a test script.

Test Control refers to a management task involving the development and application of corrective actions to bring a test project back on track when deviations from plan are observed during monitoring.

Test Execution, as the name suggests, is the process of running a test on the application or system under test. The output of test execution is the actual results, which are compared with the expected results listed in the test case being executed.
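The terms above can be sketched as code. In this minimal, illustrative example, the system under test (`apply_discount`) and the test case contents are invented for the sketch: the test case bundles a pre-condition, inputs and an expected result for one test condition, and test execution runs the case and compares actual with expected results.

```python
# A minimal sketch of the terminology: test condition, test case, and
# test execution. apply_discount is a hypothetical system under test.

def apply_discount(price, percent):
    """Toy system under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# A test case: pre-condition, input values, and an expected result,
# written to verify one test condition (discount calculation).
test_case = {
    "id": "TC-001",
    "condition": "discount calculation",   # the test condition covered
    "precondition": lambda: True,          # e.g. "catalogue is loaded"
    "inputs": {"price": 200.0, "percent": 15},
    "expected": 170.0,
}

def execute(case):
    """Test execution: check the pre-condition, run the case, and
    compare the actual result with the expected result."""
    assert case["precondition"](), "pre-condition not met"
    actual = apply_discount(**case["inputs"])
    return {"id": case["id"], "actual": actual,
            "passed": actual == case["expected"]}
```

A test procedure would then be the documented sequence of such executions; test control is the management activity of acting on the pass/fail information they produce.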