Review: The Definitive Guide to Building Code Quality, Don Jones

This book focuses on development using Microsoft .NET and Visual Studio. Those who work with these technologies will find it easy to relate to and follow most of the content. However, the book also covers general software development practices that are worth considering irrespective of the tools you use. It begins with an overview of development using .NET and Visual Studio, including a look at the basics of .NET and the Visual Studio environment. Subsequent chapters delve into practices for improving code quality: coding standards and best practices, techniques such as code analysis and peer reviews, addressing coding errors, performance and security issues, an overview of software testing, and a closing look at automated debugging, code analysis and testing.

This isn't one of those jargon-filled tomes that cannot be read from start to end without investing a lot of time and effort. The book is relatively short and easy to read through. Coming from a non-Microsoft world and having worked on a contrasting set of technologies, I still found it useful to go through the chapters and pick up insights on development practices. Being a testing professional myself, I did have some disagreement with the way testing was presented. It is hard to please a tester! Chapter 6 is devoted to testing, and it strikes me as a fairly simplistic view of the testing function that can serve, at best, as a very basic primer. While my overall comfort level with this chapter is not high, some points stand out, and I thought I'd mention them here.

Chapter 6 includes a section on code coverage analysis and implies that performing code coverage analysis will answer the question "Did we test everything?". This, I would say, is incorrect and an improper way to judge test coverage. From a business perspective, a code coverage number or percentage can be meaningless. What would be useful is to verify that all of the requirements, functional and non-functional (including usability), have been covered. While code coverage measurements have some value, excessive focus on using them to measure testing is fraught with danger. You get what you measure and reward: it does not take much for a testing organization to orient its testing and test development towards achieving higher coverage numbers without adding much value. The focus also shifts to verifying what the developer has done rather than validating that what is being developed is what the customer or user needs. There is a lot of literature on code coverage and testing, so I will not get into that here. Suffice it to say that code coverage should be one of the metrics you use, and one you use after understanding it. It should not be the sole metric that answers whether you have "tested everything".
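
To make the point concrete, here is a minimal, hypothetical sketch: the `discount` function, its stated requirement and the seeded defect are all invented for illustration. A single test reaches 100% statement coverage while the defect goes undetected.

```python
# Hypothetical sketch: one test reaches 100% statement coverage, yet the
# defect below goes undetected. Function, requirement and bug are invented.

def discount(price: float, is_member: bool) -> float:
    """Members get 10% off; a (hypothetical) requirement says others get 5% off."""
    rate = 0.10 if is_member else 0.0   # bug: the non-member rate should be 0.05
    return price * (1 - rate)

def test_member_discount():
    # This single test executes every statement above, so a statement-coverage
    # report shows 100% -- yet the non-member behavior is never asserted on,
    # and "Did we test everything?" is plainly not answered.
    assert discount(100.0, True) == 90.0
```

Running this under a tool such as coverage.py (`coverage run -m pytest`, then `coverage report`) would show full statement coverage while the non-member requirement remains completely unverified.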

Also, in the same chapter there is a section on "testing phases and efforts" that states, "Unit testing is commonly performed by developers on an ad-hoc basis as they code, and may include ad-hoc tests as well as tests from formally defined test cases". This paints a poor picture of unit testing and seems to suggest that ad hoc unit testing is the norm to be followed. Unit testing is an important activity that needs to be performed with due rigor and discipline; ad hoc unit tests are a bane of development and run contrary to the very practices the book recommends. In addition, only a few testing types are covered, and those in a simplistic manner, so some statements may not fully reflect the reality of testing. My recommendation: readers who know testing can skip the chapter, while newcomers can read it for a general overview but should realize that better and more detailed treatments of testing are available elsewhere.
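
By way of contrast, here is a minimal sketch of what disciplined unit testing can look like: small, formally defined, repeatable cases that run on every build rather than ad hoc pokes. The function and its cases are hypothetical; the example uses pytest's parametrize feature.

```python
# Sketch of disciplined, repeatable unit tests (pytest), as opposed to ad hoc
# poking in a debugger. The function under test and its cases are hypothetical.
import pytest

def normalize_username(name: str) -> str:
    """Lower-case a username and strip surrounding whitespace."""
    return name.strip().lower()

@pytest.mark.parametrize("raw,expected", [
    ("Alice", "alice"),       # simple case
    ("  Bob  ", "bob"),       # surrounding whitespace
    ("CAROL\t", "carol"),     # tab characters
    ("", ""),                 # boundary: empty input
])
def test_normalize_username(raw, expected):
    # Each case is documented, versioned and re-run on every build,
    # which is the rigor the review argues unit testing deserves.
    assert normalize_username(raw) == expected
```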

Wrapping up, most of the description of phases and activities in the book coincides with sequential lifecycle models. Today, development organizations follow a variety of models, including the popular agile models, and how development and testing occur, along with the challenges they face, differs across these models. Nevertheless, the book offers practices that can serve as useful guidelines for developers, especially those using Microsoft tools and technologies. The coverage of Microsoft tools seems extensive and is something a developer from the Microsoft world will appreciate.

The book is available for download at http://nexus.realtimepublishers.com/dgbcq.php?ref=grm

Ad hoc testing

Having looked at the various types of testing in an earlier post, we'll pause every once in a while to briefly summarize each of the different test types. Here we look at ad hoc testing.

Ad hoc Testing
  • does not make use of formal test case design techniques / methods
  • does not require test case documentation during execution
  • new tests are documented after they are executed so that they are covered as part of regular test efforts
  • test execution and results reports are generated before test case design / documentation
  • does not require any end-to-end testing of functionality
  • can cause test execution to jump across various functional areas / interfaces
  • also called monkey testing or random sampling testing (a minimal sketch follows this list)
  • uses product / domain / platform expert knowledge and intuition to foster out-of-the-box thinking
  • can bring in new perspectives
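
As a rough illustration of the "monkey testing" flavor named above, here is a minimal, hypothetical sketch: random actions are fired at a toy component while an invariant is checked after every step. All names are invented, and any failure found this way would then be written up as a documented, repeatable test, per the list above.

```python
# Minimal "monkey testing" sketch: drive a toy component with random inputs
# and check an invariant after each step. Names are illustrative, not a real API.
import random

class Counter:
    """Toy component under test: a counter that must never go negative."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
    def decrement(self):
        self.value = max(0, self.value - 1)
    def reset(self):
        self.value = 0

def monkey_test(steps=1000, seed=42):
    random.seed(seed)  # seeding makes a failing random run reproducible
    counter = Counter()
    actions = [counter.increment, counter.decrement, counter.reset]
    for _ in range(steps):
        random.choice(actions)()          # jump randomly across operations
        assert counter.value >= 0, "invariant violated"

monkey_test()
```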

Usability Testing of Software

Let us begin by taking a look at the term, usability. According to the Usability Professionals' Association, "Usability is an approach to product development that incorporates direct user feedback throughout the development cycle in order to reduce costs and create products and tools that meet user needs."

Usability testing provides the opportunity to receive feedback from the very people the application is meant for. The consequences of building something without user feedback are obvious to anyone in the industry! While planning for usability testing, it's easy to constrain it to being a "validation" type technique. Instead, usability testing and the information gathered from it should inform design and development decisions right from the outset, thereby playing a mainly "preventative" role. The idea with usability testing is to test early and test often. Usability testing lets the design and development teams identify problems before they become deeply entrenched, and the earlier those problems are found and fixed, the less expensive the fixes are. As the project progresses, it becomes increasingly difficult and expensive to make major design changes. The more you test and change based on what you learn, the more confident you can be that your application will meet your objectives and your users' needs when it is released.

An iterative process of developing prototypes, testing them with users, analyzing the results, making changes based on those results, and repeating the test-analyze-revise cycle is the recommended way to produce applications that are more usable (and acceptable) to users. In the initial stages of application development, users would be called on to perform tests that are mainly "exploratory" in nature; this feedback helps clarify direction for interface design, navigation and so on. In later stages, prior to release, "validation" type usability tests are performed to confirm that the interfaces and design are usable and that feedback from earlier stages has been incorporated.

The people who actually execute usability tests should ideally not be associated with the product or the organization. A profile of potential subjects (the people who will perform the usability tests) should be prepared that mimics end-user attributes in as fair and representative a manner as feasible. Based on this profile, subjects may be sourced through market research firms or temp and contracting agencies.

During usability testing, representative users try to find information or use functionality on the web site or application while observers watch, listen and take notes. The purpose of a usability test is to identify areas where users struggle with the application and to make recommendations for improvement. The goals most commonly monitored and measured in usability testing include time (taken to accomplish specific scenarios or tasks), accuracy (inaccurate menu or navigation choices, errors, and lack of clarity or misunderstandings), success (whether users are able to complete the scenario they were asked to perform using the expected steps) and, importantly, user satisfaction (broken down and measured per area, such as navigation, information search, etc.).
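
As a rough sketch of how these four measures might be tallied, consider the following made-up per-session observation records; the fields and values are invented for the example.

```python
# Made-up illustration: tallying the four usability measures described above
# (time, accuracy, success, satisfaction) from per-session observation records.
sessions = [
    {"seconds": 42, "wrong_clicks": 1, "completed": True,  "satisfaction": 4},
    {"seconds": 95, "wrong_clicks": 5, "completed": False, "satisfaction": 2},
    {"seconds": 57, "wrong_clicks": 0, "completed": True,  "satisfaction": 5},
]

n = len(sessions)
print("average time (s):        ", sum(s["seconds"] for s in sessions) / n)
print("average wrong clicks:    ", sum(s["wrong_clicks"] for s in sessions) / n)
print("task success rate:       ", sum(s["completed"] for s in sessions) / n)
print("average satisfaction /5: ", sum(s["satisfaction"] for s in sessions) / n)
```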

If you are developing an application, it must allow users to do their tasks at least as quickly, with as few errors, and with as much success and satisfaction as their current way of working. Ideally, it should let them be quicker, more accurate, more successful and more satisfied. Otherwise, there is little chance of customer delight.

Different types of testing

How many different types of testing are you aware of?

  • Acceptance Testing
  • Ad hoc Testing
    • Buddy Testing
    • Paired Testing
    • Exploratory Testing
    • Iterative / Spiral model Testing
    • Agile / Extreme Testing
  • Aesthetics Testing
  • Alpha Testing
  • Automated Testing
  • Beta Testing
  • Black Box Testing
  • Boundary Testing
  • Comparison Testing
  • Compatibility Testing
  • Conformance Testing
  • Consistency Testing (Heuristic)
  • Deployment Testing
  • Documentation Testing
  • Domain Testing
  • Download Testing
  • EC Analysis Testing
  • End-to-End Testing
  • Fault-Injection Testing
  • Functional Testing
  • Fuzz Testing
  • Gray Box Testing
  • Guerilla Testing
  • Install & Configuration Testing
  • Integration Testing
    •  System Integration
    •  Top-down Integration
    •  Bottom-up Integration
    •  Bi-directional Integration
  • Interface Testing
  • Internationalization Testing
  • Interoperability Testing
  • Lifecycle Testing
  • Load Testing
  • Localization Testing
  • Logic Testing
  • Manual Testing
  • Menu Walk-through Testing
  • Performance Testing
  • Pilot Testing
  • Positive & Negative Testing
  • Protocol Testing
  • Recovery Testing
  • Regression Testing
  • Reliability Testing
  • Requirements Testing
  • Risk-based Testing
  • Sanity Testing
  • Scalability Testing
  • Scenario Testing
  • Scripted Testing
  • Security Testing
  • SME Testing
  • Smoke Testing
  • Soak Testing
  • Specification Testing
  • Standards / Compliance Testing
    • 508 accessibility guidelines
    • SOX
    • FDA / Patriot Act
    • Other standards requiring compliance
  • State Testing
  • Stress Testing
  • System Testing
  • Testability Testing
  • Unit Testing
  • Upgrade & Migration Testing
  • Usability Testing
  • White box Testing
    • Static Testing Techniques
      • Desk checking
      • Code walk-through
      • Code reviews and inspection
    • Structural Testing Techniques
      • Unit Testing
      • Code Coverage Testing
        • Statement
        • Path
        • Function
        • Condition
      • Complexity Testing / Cyclomatic complexity
      • Mutation Testing
We shall look at some of the above types in subsequent posts.

Software Testing: Test the tests

Tests are designed to test development work products. A lot depends on these tests and their evaluation of the software system or components being tested. However, like the software work products themselves, tests are an output of human endeavor, and humans are prone to making errors. Since the entire organization relies on the tests that testers develop, it is important that these tests first be tested well.

All test artifacts, such as test cases, automation suites, etc., should be tested extensively. Test the tests first, so they can then test effectively. Examples of testing tests include reviews by peer testers as well as development counterparts, execution with controlled inputs, verifying test steps and validating outputs. Similarly, test automation suites need to be treated with the same care as a regular development project and be subject to the same software development standards and practices. These suites should undergo the same levels of testing and QA as the regular software products that the organization produces.
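
As a small, hypothetical illustration of "execution with controlled inputs": before trusting an oracle function shared across an automation suite, feed it inputs whose verdicts are known in advance, including a deliberately seeded defect. All names below are invented.

```python
# Hedged sketch of "testing the tests": seed a shared test oracle with inputs
# whose verdicts are known in advance before trusting it across a suite.

def is_valid_order_total(subtotal: float, tax: float, total: float) -> bool:
    """Oracle used by many tests: total must equal subtotal plus tax."""
    return abs((subtotal + tax) - total) < 0.01

def test_oracle_accepts_known_good_input():
    assert is_valid_order_total(100.00, 8.25, 108.25)

def test_oracle_rejects_seeded_defect():
    # A deliberately wrong total: if the oracle passed this, the oracle itself
    # would be broken and every test built on it would be untrustworthy.
    assert not is_valid_order_total(100.00, 8.25, 110.00)
```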

For testware to inspire confidence in its ability to suitably test software, it must first be tested itself.

Why do we test? What is the purpose of software testing?

To answer these questions, let us look at the nature of software testing. The software testing group is a service provider: software testers provide valuable information and insights into the state of the system, and this information reduces ambiguity about the system. For example, when deciding whether to release a product, decision makers need to know the state of the product, including aspects such as its conformance to requirements, its usability, any known risks, and its compliance with applicable regulations. Software testing enables objective assessments of the degree of conformance of the system to stated requirements and specifications.

Testing verifies that the system meets its different requirements, including functional, performance, reliability, security, usability and so on. This verification is done to ensure that we are building the system right. In addition, testing validates that the system being developed is what the user needs; in essence, validation is performed to ensure that we are building the right system. Apart from helping make decisions, the information from software testing helps with risk management.

Software testing contributes to improving the quality of the product. You will notice that we have not mentioned anything about defects or bugs up to this point. While finding defects is one of the purposes of software testing, it is not the sole purpose: it is equally important for software testing to verify and validate that the product meets the stated requirements and specifications. Quality improvements help the organization reduce post-release support and service costs, while generating customer goodwill that can translate into greater revenue opportunities. Also, where products must comply with regulatory requirements, software testing can safeguard the organization from legal liability by verifying compliance.

This is the second part of the series on Software Testing, from the ground up. The previous post looked at the question - What is Software Testing?

What is Software Testing?

While embarking on the journey to learn Software Testing, the starting point is understanding its meaning. So, what is Software Testing?

The IEEE definition states that testing is, "an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component". According to Hetzel, "Software Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results". Further, Myers states, "Software Testing is the process of executing a program or system with the intent of finding errors".

Software Testing involves an evaluation of the system or component. This evaluation is done to: a) see whether the system fails to perform what it is supposed to perform, and b) see whether it performs what it is not supposed to perform.

Given the above definitions, a question that often gets asked is: what does testing prove? Does testing prove the absence of defects? The answer: testing does not prove anything. Testing is not about proving the absence of defects; it helps reduce the risk perceived to be associated with the system or component. Testing is about finding defects, but not about finding all of the defects. This distinction is important to make.

Software Testing involves operating a system or component under controlled conditions, evaluating the results and checking whether performance meets expectations. Testing is often compared to the headlights of a car on an unknown stretch of road in the dark: it brings to light critical information that enables better decision making. In the words of James Bach, testing is "organized skepticism", an inherent belief that things may not be as they seem. Testing helps reduce uncertainty about the state of the system or component, and has jokingly been described as the process of comparing the ambiguous to the invisible, so as to avoid the unthinkable happening to the unknown. On a lighter note, testing is considered a support function that enables developers to look good by finding their "mistakes" before anyone else does.

This is part 1 of the series on Software Testing, from the ground up. Stay tuned for upcoming posts in this series.

Coming up next .. Software Testing from the ground up

Coming up next is a series of posts that explain, examine and delve into the subject of Software Testing, from the ground up. The series starts from the very basics, explaining what testing is, and gradually moves on to cover more advanced areas.
The upcoming series is mainly targeted at the following groups.
  1. New Testers - a good opportunity to learn Software Testing starting from the basics
  2. Non-Testers and professionals from other functions - gain valuable insights into the art and science of the Software Testing function, clarify any doubts and shatter any myths around the discipline
  3. Experienced Testing professionals and practitioners - a useful refresher and an opportunity to enhance skills around Software Testing where needed

Why do users find bugs that Software Testers miss?

Here's a familiar scenario: testers spend months or longer testing a product. Once the product is released, users report bugs that were not found by the testing team. The obvious question that gets asked is: how did the testing team miss these issues?

Listed below are some of the common reasons why users catch issues that the software testing team may have missed.
  • The testing team has not tested in an environment similar to the user's. This can happen for a variety of reasons. It could be due to a lack of awareness of the user environment or usage scenario. Where there is awareness of the environment, the testing team may not have had the time or bandwidth to test that scenario. Where there is awareness and time, the team may have been unable to replicate the scenario due to physical or logistical constraints, such as the availability of required hardware, software, peripherals, etc. While it is not possible to replicate every possible usage scenario, testing must cover the most probable and widely used ones.
  • The steps that users followed differed from what the testing team followed. This can happen when users follow a different set of steps than the testing team did, or when the order of the steps differs. Even when the same set of steps is followed, a different ordering can have different consequences.
  • The user entered input data that was not covered during testing. This can occur for the simple reason that it is physically impossible to test every possible set of input data; when a product is deployed widely, it is likely that some users somewhere will enter values that were untested. While designing tests, testers choose sets of input values to test with, and errors in making these choices can also contribute to user-reported defects (a property-based testing sketch follows this list).
  • The defect that users reported could come from code that was not tested. This could be due either to having released untested code or to the existing set of tests not exercising the piece of code where users found defects. The challenges the software testing team encounters increase as our products become more complex.
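
On the input-data point above, one way teams widen the net beyond hand-picked values is property-based testing. Here is a minimal sketch using the hypothesis library; the `clamp` function is a made-up example, and the test asserts an invariant that must hold for whatever inputs the tool generates.

```python
# Hedged sketch: widening input coverage with property-based testing via the
# hypothesis library. The function under test is a made-up example.
from hypothesis import given, assume, strategies as st

def clamp(value: int, low: int, high: int) -> int:
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

@given(st.integers(), st.integers(), st.integers())
def test_clamp_stays_in_range(value, low, high):
    assume(low <= high)  # discard ill-formed ranges instead of inventing behavior
    result = clamp(value, low, high)
    # Invariant that must hold for any inputs the tool generates
    assert low <= result <= high
```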

IEEE Std 829 Software Test Plan (IEEE Standard for Software and System Test Documentation)

This article examines the Test Plan in detail, referring mainly to IEEE Std 829-2008 (IEEE Standard for Software and System Test Documentation).

1. Test Plan definition

The IEEE Std 829 defines a Test Plan as, "(A) A document describing the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. (B) A document that describes the technical and management approach to be followed for testing a system or component. Typical contents identify the items to be tested, tasks to be performed, responsibilities, schedules, and required resources for the testing activity. (adopted from IEEE Std 610.12-1990 [B2])"

2. Why is a Test Plan required?

The Test Plan is the fundamental document for testing. The purpose of the Master Test Plan, as stated by IEEE Std 829, is to provide an overall test planning and test management document for multiple levels of test (either within one project or across multiple projects).

The Test Plan is a medium of communication, used by the testing team to communicate to the other project participants the team's intent, expectations and understanding of the test activity to be performed. It is important that all the stakeholders of the project review and sign off on the test plan; the Test Plan helps the testing team set the right expectations about the intended test activities. It makes sound business sense to have the review and sign-off happen before the test effort starts. This helps avoid misunderstandings around the testing effort and protects the testing team from potential complaints later in the project about the testing performed.

The Test Plan is a product of the test planning process. The Test Plan may be referred to both as a map and a blueprint for testing. It is a comprehensive document that offers clarity to all the project's stakeholders about testing. It addresses the how, what, when, who, where and why of testing and helps the testing group to focus.

3. What constitutes a Test Plan?

The IEEE Std 829 provides an outline of the Master Test Plan (MTP). The MTP involves selecting the constituent parts of the project's test effort; setting the objectives for each part; setting the division of labor in terms of time and resources, and the interrelationships between the parts; identifying the risks, assumptions and standards of workmanship to be considered and accounted for by the parts; defining the test effort's controls; and confirming the applicable objectives set by quality assurance planning. It also identifies the number of levels of test, the overall tasks to be performed and the documentation requirements.

Master Test Plan outline

1. Introduction

This section identifies the document and describes the entire test effort, including the test organization, test schedule and the system characteristics (such as complexity, risk, safety level, security level, desired performance, reliability, and/or cost) selected as important to stakeholders, and arranged into discrete levels of performance or compliance, to help define the level of quality control to be applied in developing and/or delivering the software. A summary of required resources, responsibilities, tools and techniques may also be included here.

Identifier: This uniquely identifies the version of the document by including information such as the date of issue, author(s), approver signatures, status (e.g. draft, reviewed, corrected or final), reviewers and managers. This information may be placed either at the beginning or end of the document.

Scope: This describes the purpose, goals and scope of the test effort. Identify the project(s) for which this plan is written and the specific processes and products covered by the test effort. Describe the inclusions, exclusions, assumptions and limitations. Test tasks will reflect the overall test approach and the development methodology: for example, if development follows the waterfall methodology, each level of testing will be executed once, whereas with an iterative methodology there will be multiple iterations of each level of testing. The test approach identifies what will be tested, and in what order, across the range of testing levels such as component, component integration, system and acceptance. It identifies the rationale for testing or not testing, as well as for the selected order of testing, and may also identify the type of testing performed at the different levels listed earlier.

References: This lists all applicable reference documents, both internal and external.

System overview and key features: This describes the purpose and key features of the system or software product under test or reference where the information can be found.

Test overview: This describes the test organization, test schedule, system characteristics (such as complexity, risk, safety level, security level, desired performance, reliability, and/or cost) selected as important to stakeholders, test resources, responsibilities, tools, techniques and methods necessary to perform testing.

While describing the test organization, an organization chart may be included to clarify the reporting structure. Include information on the authority for resolving issues raised by testing, and the authority for approving test products and processes.

The test schedule describes the test activities within the project lifecycle and milestones. Summarize the overall schedule of testing tasks, identifying where task results feed back to development and related processes such as quality assurance and configuration management. Also, describe the task iteration policy for re-execution of test tasks and any dependencies.

The test resource requirements should be summarized to include staffing, facilities, tools and any special procedural requirements such as security, access rights, etc.

Include information on the responsibilities for testing tasks. Responsibilities may be primary (task owner / leader) or secondary (providing support) in relation to specific test-related tasks.

Describe the hardware, software, test tools, techniques, methods and test environment to be used in testing. Include information pertaining to acquisition, training and support for each tool, technology and method. Include the metrics to be used by the testing effort.

2. Details

This section describes the test processes, test documentation and test reporting requirements.

Test processes and definition of test levels: Describe the test activities and tasks for all development lifecycle processes. List the number and sequence of levels of test. Levels of test may include component, integration, system, acceptance, security, usability, performance, stress, interoperability, regression, etc. Not all projects will have all the levels of test. Some projects may have fewer levels of test and could combine multiple levels. The test processes may either be described here or reference to already defined standards may be provided.

In addition, the IEEE Std 829 recommends that for each test activity, the following topics be addressed.
  • Test tasks: Identify the test tasks
  • Methods: Describe the methods and procedures for each test task, including tools. Also, define the criteria for evaluating the test task results
  • Inputs: Identify the required inputs for the test task. Specify the source of each input. Inputs may be derived from preceding tasks or activities
  • Outputs: Identify the required outputs from the test task
  • Schedule: Describe the schedule for the test tasks. Establish specific milestones for initiating and completing each task, for obtaining input and for delivery of output
  • Resources: Identify the resources for the performance of the test tasks. Examples of resources include people, tools, equipment, facilities, budgets, etc.
  • Risks and Assumptions: Identify any risks and assumptions associated with the test tasks. Include recommendations to eliminate, reduce or mitigate risks identified
  • Roles and responsibilities: Identify for each task, who has the primary and secondary responsibilities for task execution and the nature of the roles they will play
Test documentation requirements: Here, define the purpose, format and content of all other testing documents that are to be used (in addition to those that are defined in the "Test reporting requirements" section)

Test administration requirements: These are needed to administer tests during execution and involve describing the following.
  • Anomaly resolution and reporting process: Describe the method of reporting and resolving anomalies. This would include information about the anomaly criticality levels, authority and time line for resolution.
  • Task iteration policy: Describe the criteria for repeating testing tasks when their inputs or task procedures change, for example, re-executing tests after anomalies have been fixed.
  • Deviation policy: Describe the procedures and criteria for deviation from the MTP and test documentation. The information for deviations includes task identification, rationale and effect on product quality. Also, identify the authorities responsible for approving deviations.
  • Control procedures: Identify control procedures for test activities. These procedures describe how the system, software products and test results will be configured, protected and stored. They may also describe quality assurance, configuration management, data management, compliance with existing security provisions and how test results are to be protected from unauthorized alterations.
  • Standards, practices and conventions: Identify the standards, practices and conventions that govern the performance of testing tasks.
Test reporting requirements: This specifies the purpose, content, format, recipients and timing of all test reports. Test reporting includes test logs, anomaly reports, level interim test status reports, level test reports, master test report and any optional reports defined during preparation of the test plan.

3. General

This section includes the glossary of terms, acronyms, description of the frequency and process by which the master test plan is changed and base-lined and may also contain a change page mentioning the history of changes (date, reason for change and who initiated the change).

A result of the test planning process must be a clear, agreed-upon definition of the product's quality and reliability goals in absolute terms, with no room for subjective interpretation. Everyone on the project team must know what the testing team intends to do and what the quality goals are. Test planning is not about filling in a template or writing a document; it is an important activity that must involve testers and representatives from all functions that are part of the project team. Getting everyone on the same page, in agreement on what is to be tested, why it is to be tested and how it is to be tested, is key to testing success.

Fuzz Testing / Fuzzing as a Software Testing technique

Fuzzing, or fuzz testing, is a method of testing applications by randomly altering or corrupting input data. The idea behind fuzz testing is to hit the application under test (AUT) with random, corrupt (bad) data and observe how the system behaves. Fuzz testing can be done both manually and using automation. Automated fuzz tests tend to be more effective at unearthing issues, given the variety and range of data they can supply to the application, and may be used to send a regular barrage of garbage to the AUT. The emphasis on fuzz testing has increased with the growing significance of security testing and the tools now available. Tests developed through reasoning and logic by human testers would not usually find the issues that a fuzz test may reveal.

Fuzz testing is a simple technique, but it can reveal important defects that need addressing. Corrupt data can cause applications to crash or behave unexpectedly; on some older operating systems, an application crash caused by corrupt data could bring down the computer system itself. The defects identified via fuzzing can be potential security holes if left unaddressed. The outline of steps to perform fuzz testing is listed below, followed by a minimal illustrative sketch.

1. Gather the correct set of input data for your application
2. Change some or all parts of the input data with random or corrupt data
3. Pass this modified input data to your application and observe what happens
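
Below is a hedged sketch of these three steps as an automated mutation fuzzer. Everything here is illustrative: `toy_parser` stands in for the real application entry point, and the starting input is assumed to be a small, non-empty, known-good record.

```python
# Minimal mutation-fuzzing sketch of the three steps above. All names are
# illustrative; toy_parser stands in for the real application entry point.
import random

def fuzz(valid_input: bytes, entry_point, iterations=1000, seed=1):
    random.seed(seed)  # a fixed seed makes any crash reproducible
    for i in range(iterations):
        data = bytearray(valid_input)              # step 1: start from known-good input
        for _ in range(random.randint(1, 8)):      # step 2: corrupt a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            entry_point(bytes(data))               # step 3: feed it in and observe
        except ValueError:
            pass  # cleanly rejecting bad input is the desired behavior
        except Exception as exc:
            print(f"iteration {i}: unexpected {type(exc).__name__} on {bytes(data)!r}")

def toy_parser(record: bytes):
    """Hypothetical parser for 'KEY=VALUE' records."""
    key, _, value = record.decode("utf-8").partition("=")
    if not key or not value:
        raise ValueError("malformed record")
    return key, value

fuzz(b"NAME=alice", toy_parser)
```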

Fuzz testing may be performed manually to begin with, but for greater effectiveness, automated fuzzing is recommended. Fuzz testing requires a good deal of creative thinking. The steps listed above may seem simplistic; however, once the initial defects are reported, developers will harden the application and introduce greater checks and verifications before accepting inputs. After this point, it becomes harder (though not impossible) to identify more defects, and testers need to exercise a greater degree of creativity to work around these countermeasures and break the software. It is important to think like the hacker who will be looking to break the system.

Fuzz testing can quickly expose some of the "assumptions" that developers make. For example, if a parameter is expected to accept a specific range of numbers, the program should check the input data to ensure it matches what is expected, rather than assume the data is correct. Similarly, when working with files it created earlier, the application under test must verify the integrity and validity of a file before reading it again; assuming that the files it created are still valid is a potential security hole, since an attacker can take advantage of this lapse and modify the file.
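
Here is a small, hypothetical sketch of the two defensive checks just described: a range-validated parameter, and an integrity check (using Python's standard hashlib) before re-reading a file the application wrote earlier. The names and limits are invented.

```python
# Hedged sketch of the two defensive checks described above. Names and limits
# are illustrative, not drawn from any real application.
import hashlib

def set_volume(level: int) -> int:
    if not 0 <= level <= 100:        # never assume input is in range
        raise ValueError(f"volume out of range: {level}")
    return level

def read_if_intact(path: str, expected_sha256: str) -> bytes:
    with open(path, "rb") as f:
        data = f.read()
    # Verify the file is still the one we wrote before parsing it again
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("file failed integrity check; refusing to parse it")
    return data
```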

Fuzz testing is not a replacement for other, more formal testing. When an application passes a fuzz test, it basically shows that the software can handle exceptions and incorrect inputs in a safe and sane manner. Fuzz testing is used to find defects and to exercise an application's error-handling capabilities. Greater success with fuzz testing requires a detailed understanding of the application and the related technologies being tested: if we are testing a protocol implementation or a specification, it helps to really know that protocol or specification. This knowledge can be used to devise fuzzing strategies that expose holes in the product. Fuzz testing demonstrates the existence of bugs, not their absence.

Software Testing - for whom do we test?



Software Testers spend a significant part of their work lives testing! How often do testers pause to think about the question: whom are we testing for? The answer managers usually give is: stakeholders. That brings up the next set of questions: who is a stakeholder, and who are the stakeholders for testing? A stakeholder is someone who has a stake or interest in the work that you do; a testing stakeholder likewise has an interest in the quality of the final work product. Stakeholders in testing may be broadly classified as internal and external.

Some organizations view all stakeholders within the organization as "internal" while any stakeholders outside the organization are considered "external". In other groups, anyone involved with the test effort - performing testing, or leading or managing test efforts - is considered an "internal stakeholder", while all others are considered "external". Who constitutes a stakeholder can vary across projects and organizations. Given below is a brief list of stakeholders for testing.
  • Other testers in the group
  • Test Managers and Test Leads, involved in managing and leading test efforts
  • Architects and Designers, responsible for designing the product
  • Implementors - the developers, development managers and leads involved with developing the product
  • Product marketing folks, involved in determining the features for the product and are interested in the quality of implementation of these features
  • Analysts, involved in determining the requirements for the product and the related quality attributes
  • Program/Project Management folks, responsible for project planning, organizing and managing resources for successful completion of project objectives
  • Customer Support folks, responsible for supporting customers and users of the product produced
  • Sales folks, responsible for selling the software
  • Executives / Senior Management of the organization, who run the organization
  • Shareholders of the organization
  • Users of the software
  • Partners of the organization
  • Vendors, who may supply components that are integrated with the product
  • Customers, who pay for the software
  • Community / Society, where the software product is being used
  • Governmental authorities, who are interested in the software complying with applicable laws and regulations
The above list is not complete and can vary depending on your particular project or organization.

Stakeholders generally want the software testing effort to succeed and a quality product to be produced. However, some stakeholders can be neither positive nor negative about your testing outcome. An example would be government authorities, who are normally interested only in your project or organization following the rules rather than in whether you completed the project successfully; as long as you do not violate any applicable laws and rules, they should be fine. A few stakeholders could even be interested in seeing the project fail and could be glad if testing showed up a lot of failures; these could be people interested in maintaining the status quo and resistant to the change the project could bring. It is important for the software testing group to identify who the testing stakeholders are, and which of them are supportive, neutral or unsupportive.

Software Testing challenges in the Spiral Lifecycle model



Having looked at the sequential and iterative models in the previous posts, let us now look at testing in the spiral lifecycle model. The spiral model is generally used in large, complex and expensive projects. Development proceeds via a sequence of prototypes that are designed, built and tested, with learnings from earlier prototypes used as inputs to subsequent prototyping efforts. The spiral model may be considered an incremental and evolutionary prototyping model; like the agile models, it produces software early in the life cycle.

Some of the common challenges that the software testing group will encounter with this model are listed here. Firstly, by its very nature, the spiral model is based on the premise that change is constant. As in the agile models, but even more so in the spiral model, changes to the design, technologies or tools used to produce the software can occur widely. The testing team needs to adopt flexible practices in every aspect of its functioning, staying both open and adaptable to change; testing cannot assume that a particular tool, technique or system will be used throughout the lifecycle. Spiral models are suited to situations with many unknowns, where risks are still being analyzed. In such cases, an early prototype may employ a particular set of technologies or an architecture which, on evaluation, is discarded in favor of a newer or radically changed design. Testing needs to be flexible and able to move in step with these changes in development.

Secondly, testing in the spiral model needs to be exploratory in nature, at least in the initial stages, since the purpose of the early prototypes is to understand risks and explore unknowns. Testing needs to be flexible and able to go deeper into specific problem areas as required. In subsequent stages, the testing goals shift towards more formal and standard testing, including regression and related test types. The testing group therefore needs to be able to handle both informal and formal testing requirements at different stages of the lifecycle.

Finally, because the spiral model involves repeated prototyping and dealing with unknowns, accurate test planning and estimation can be a challenge. Given the complexity and degree of experimentation involved, working with predictable timelines is not usually possible.

Over the course of the past three posts, we have covered some of the common challenges faced in Software Testing with each of the three Lifecycle models - sequential, iterative and spiral.

Software Testing challenges in the Iterative lifecycle models



Iterative / incremental lifecycle models, as the name suggests, focus on developing software in increments or chunks. Popular iterative models include the agile methodologies such as Scrum, XP, etc. A common thread among iterative models is that integrated, working systems are produced and delivered at the end of each iteration: business functionality is divided into chunks, and each iteration delivers a chunk of functionality that is integrated and tested. From a testing perspective, the software testing group gets testable systems early in the lifecycle, unlike in a sequential approach. However, iterative and incremental models pose their own set of challenges for the testing team.

The main challenge for testers in agile / iterative models is that the system is constantly changing. New code is being checked in regularly and testing cannot wait for or expect software that is in a final state. For the testing team, the software is a moving target. Software Testing teams need to adopt different approaches, tools and strategies to handle this level of change.

In iterative models, the riskier and more important functionality and features tend to be built during the earlier iterations, with subsequent increments adding functionality on top of what was built previously. Given the amount of change that can happen in each increment, the risk of regressions is high. With each new increment, it becomes very important for the testing team to regression test all the features delivered in previous increments to make sure nothing is broken (a brief sketch of one way to organize this follows).
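
One common way to organize this, sketched below under assumed conventions, is to tag each increment's tests so the accumulated suite can be re-run on every build. The feature, test names and marker names are all hypothetical, and the markers would need to be registered in pytest.ini.

```python
# Hedged sketch: each increment's tests join a permanent regression suite,
# tagged with pytest markers (assumed registered in pytest.ini) so the
# accumulated set is re-run on every new build. All names are invented.
import pytest

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical feature: flat rate plus a per-kg charge."""
    return 5.0 + 2.0 * weight_kg

@pytest.mark.increment1
def test_base_rate():               # delivered in increment 1
    assert shipping_cost(0) == 5.0

@pytest.mark.increment2
def test_per_kg_charge():           # delivered in increment 2
    assert shipping_cost(3) == 11.0

# On each new build, re-run everything shipped so far, e.g.:
#   pytest -m "increment1 or increment2"
```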

Another interesting facet of iterative development is that activities across increments can overlap; it is not usually the case that one increment completes entirely before the next begins. For example, the testing team will generally be testing the current or previous increment's build in detail while the development team works on the next increment's features. While testers will in most cases also be asked to help with incremental testing of the new features the developers are working on, the primary responsibility of the testing group is to thoroughly regression test the last increment's build plus the new features delivered with it. This overlap can pose challenges in addressing the bugs the testing team identifies and in the time taken to fix them.

Software testers also need to be aware that iterative models such as agile are not really in favor of heavyweight processes or formal methods. There are extreme cases where the value of having testers at all is questioned, but that is the extreme; most mature agile teams realize the immense value that testers add in producing a quality product. Testers need to tailor their methods to suit the agile world.

Testing groups also need to understand that the organization does not really want "zero-defect" or "defect-free" software. While stakeholders will obviously not state this, and will continue to ask for defect-free software, the organization will not be ready to make the investments necessary to reach that objective or even come close to it. In reality, as we approach the goal of defect-free software, costs keep rising (theoretically, flushing out every last bug would drive costs towards infinity) to a level where organizations no longer want to invest. Finding the right balance between costs incurred and value derived is important.

In the next post, let us look at testing in the spiral lifecycle model.

Software Testing in Sequential Lifecycle models (waterfall, v-model)



Sequential lifecycle models involve building the system in a sequence of steps or phases. Common sequential lifecycle models are the Waterfall model and the V model. These models have been around for a long time and are sometimes referred to as the traditional lifecycle models.

In this post, let us look at the nature of issues these models can pose to testing, by first examining some of the characteristics of these models.
  • The Software Testing phase in sequential models is slated to occur towards the end of the lifecycle
  • These models tend to be schedule and budget driven with delivery dates usually being fixed
  • For most projects it is hard to correctly make estimates across the different sequential phases for any sizable time frame
Software testing in sequential models is often subject to schedule compression. What is initially estimated may not be what is actually available, or may not suffice for the testing team to complete its activities. When activities in any of the preceding phases extend beyond their estimates, it is often the case that, towards the end of the project, the time available for testing gets cut back; a trade-off between testing and delivery dates usually becomes necessary. Often, the software testing group faces pressure to approve the release of the product by the delivery date even when it has not received the time it originally estimated and planned for. Yet when issues are reported from the field, the testing team is in the line of fire for not having caught them.

Schedule compression is not limited to the software testing team; product development teams face this issue too. Schedule pressures can result in developers delivering poor-quality artifacts to the testing team, and developer testing may be short-circuited as well. The effect is that a considerable part of the testing time is spent identifying, reporting and tracking defects that should not have been there in the first place. In such cases, it may seem that the testing team is spending much of its time performing unit-level testing on the build delivered to it, rather than the higher levels of testing the team had planned.

Sequential models require a fixed or near-final set of requirements to be defined at the start of the project, and subsequent phases, including implementation and testing, are based on those requirements. Changes to the requirements will impact the testing group's plans: any change requires the project to take a few steps back, incorporate the changes and test them before moving ahead.

It is also often the case that the testing team is not involved from the outset, with testing relegated to its own phase near the end of the software project lifecycle. This leaves little time for the testing group to prepare, and results in testing becoming purely reactive in nature, with little scope for preventive activities. The effectiveness and value of the testing team is thereby reduced.

Effective test management can enable the group to handle the challenges mentioned above. In the next post, we look at testing in the incremental and iterative models.

Software Testing in the Product Lifecycle

In this and subsequent posts, we look at Software Testing and how it fits in with the various Software Lifecycle models.

Software Testing is an integral part of the Software Lifecycle, irrespective of the type of lifecycle followed - sequential, iterative, spiral or incremental. How testing aligns itself will vary depending on the lifecycle selected. For example, in a sequential model such as the waterfall or v-model, where the assumption is that requirements are firmed up at the start of the lifecycle and changes are managed, the test team can adopt a requirements-based test strategy. With such a strategy, the test team is aligned from the outset of the lifecycle and begins test planning and development early. Such alignment can enable early detection of issues and clarification of requirements, allowing testing to play a preventive role.

In the case of iterative / incremental lifecycle models such as the agile methodologies, the test team receives requirements in increments at the start of each sprint or iteration. Here, the testing team can adopt a risk-based test strategy to identify and prioritize risk areas, with test design and development happening just before execution. Here too, defect detection begins early and proceeds in short iterations through the project; however, the scope for testing to play a preventive role is reduced in such a lifecycle model. Test activities tend to occur in parallel with other activities in the software lifecycle.

In upcoming posts, we look at each of the lifecycle models and how testing fits in.

Top 10 qualities of a Software Testing professional



Presenting the Top 10 qualities required of a Software Testing professional a.k.a. Software Tester.

1. Curiosity

Software Testers need to like exploring and discovering. They should be curious about everything, displaying keenness in understanding the why and how of products, their interactions, dependencies and their ecosystem in its totality. Testers are required to venture beyond the realms of the tried and known, to discover what lies beyond. Installing new software builds, experimenting, and seeking to better understand the software and break it should come naturally to a tester.

2. Detail oriented and Thorough

Software Testing requires discipline and a systematic approach. It is important for testers to pay attention to details and be thorough. While testers should want to explore and experiment, they must also be sure not to leave gaps in test coverage; ensuring that all requirements and areas are thoroughly tested is important. Having an eye for detail also means looking out for oddities and incorrect behaviors in the application being tested: what might seem like a small, insignificant or even irregular occurrence may be the harbinger of much larger issues. It pays to scrutinize each issue thoroughly.

3. Trouble-shooter

Testers should be good at helping to root-cause issues. Being good at finding out why something does not work is a useful attribute for a tester to possess. Testers should be able to narrow down the conditions that cause an issue and identify, or at least suggest, probable causes for the issues observed. A detailed bug report that lists the issue, narrowed-down steps to reproduce and a probable cause, along with other relevant details, helps developers address issues faster and better. Being able to find out why something does not work can also point to more issues lurking nearby, or to areas that need more testing. A tester's job is not just about executing a standard set of tests and reporting failures.

4. Perseverance

Testers must keep at testing, exploring and trying to unearth issues. Bugs may appear intermittently or only under certain conditions; testers should not ignore them or give up, but instead try different scenarios to reproduce the issue. Software Testers must also realize that all products have bugs: if a product looks to be free of bugs, it just needs more testing to find the issues that current tests haven't reached. Testers should always be in pursuit of bugs and view every defect found by a customer as a slip or gap in their tests that must be addressed immediately.



5. Creativity

Software Testing is an art. It is not enough to test for the obvious; finding bugs requires creativity and out-of-the-box thinking, and Software Testing must be among the most creative of professions. Let's make a fairly simplistic comparison between testing software and developing software, which is considered a creative endeavor: which discipline needs more creativity, introducing defects or finding them? While the comparison is a bit crude, the idea is that it is harder to find defects when you do not know what defects exist, or how many. It requires a high degree of creativity to discover defects in software.

6. Flexible Perfectionists

Software Testing requires the pursuit of perfection. However, that pursuit must be tempered with flexibility: there are times when perfection may not be attainable or even feasible. Testers, while seeking perfection, should adopt a certain degree of flexibility when perfection is not an ideal goal to seek. As an example, when testers report bugs, they should also pursue a fix for each bug. Now, a fix need not mean only fixing the software: it could be a statement in the release notes or other documentation that highlights the defect as a known open issue, or a case of marketing toning down its spiel or enlightening customers about the potential issue. In the real world, it may not be possible to fix every defect that testers want fixed. Being able to prioritize and pick your battles appropriately, knowing when to give in and when to stick to your guns, is important.

7. Good Judgement

There's a saying that good judgement results from experience, and experience results from bad judgement! Good judgement, combined with the other tester skills, makes for successful software testing efforts. Judgement involves tasks such as deciding what to test and how much, estimating the time testing will take, and taking a call on whether an issue is really a defect or whether a bug is worthy of deeper pursuit. Using judgement to determine the extent of testing involves comparing the current project with past projects to estimate the quantum of risk. While this trait produces results, it comes from experience and knowledge gained over time and across projects.

8. Tact and Diplomacy

Software Testing involves providing information, and often we carry "bad news". An important part of the testing job is telling developers that their code is defective, highlighting the issue and its possible causes; at a human level, it is like telling parents that their baby is ugly. Contrary to the popular belief that testers and developers must be at loggerheads, software testers need a good working relationship with developers, and cooperation between the two functions is key to producing a quality software product. Tact and diplomacy are important for conveying bad news, following up for fixes and maintaining cordial relationships. Successful testers know how to do this balancing act and deal with developers tactfully and professionally, even when the other party is not very diplomatic.

9. Persuasive

This trait follows on from the previous one, tact and diplomacy. Once the tester breaks the news about issues in the code, a range of reactions can arise. One likely reaction is that a reported issue is categorized as not important or severe enough to warrant a fix; bugs may be re-prioritized, downgraded, deferred to a later time frame or documented as open issues. Just because the tester thinks a bug must be fixed does not mean that developers will agree and jump on fixing it. If a bug needs fixing, testers must be persuasive and clearly state the reasons for requiring a fix in a specified time frame. In case of a stalemate, communicating effectively with stakeholders and getting their input may also be required. Persuasion goes hand in hand with the other traits mentioned earlier to ensure issues are addressed appropriately.

10. Testing is in the DNA

Finally, Software Testers never really stop testing. Testing does not end when the current set of test cases is completed or the specifications are covered; testers continue evaluating the product in ways that may not be covered by the requirements or specifications. Testers think about testing all the time, figuring out newer ways to break software.

Tester - skills needed for successful software testing (3)

We continue exploring the traits needed for testers to be successful in testing software. The previous post looked at traits 4-6. In this post, let us look at three more Software Tester traits.

Tester Trait 7: Good judgement (judgment)

There's a saying that good judgement results from experience, and experience results from bad judgement! Good judgement, combined with the other tester skills, can make for highly successful software testing. Judgement involves elements such as deciding what to test and how much, estimating the time testing will take, and taking a call on whether an issue is really a defect or whether a bug is worthy of deeper pursuit. Using judgement to determine the extent of testing involves comparing the current project with past projects to estimate the quantum of risk. While this trait produces results, it comes from experience and knowledge gained over time and across projects.


Tester Trait 8: Tact and Diplomacy

Software Testing involves providing information, and often we carry "bad news". An important part of the testing job is telling developers that their code is defective, highlighting issues and their possible causes; at a human level, it is like telling parents that their baby is ugly. Contrary to the popular belief that testers and developers must be at loggerheads, software testers need a good working relationship with developers, and cooperation between the two functions is key to producing a quality software product. Tact and diplomacy are important for conveying bad news, following up for fixes and maintaining cordial relationships. Successful testers know how to do this balancing act and deal with developers tactfully and professionally, even when the other party is not very diplomatic.

Tester Trait 9: Persuasive

This trait follows on from the previous one, tact and diplomacy. Once the tester breaks the news about issues in the code, a range of reactions can arise. One likely reaction is that a reported issue is categorized as not important or severe enough to warrant a fix; bugs may be re-prioritized, downgraded, deferred to a later time frame or documented as open issues. Just because the tester thinks a bug must be fixed does not mean that developers will agree and jump on fixing it. If a bug needs fixing, testers must be persuasive and clearly state the reasons for requiring a fix in a specified time frame. In case of a stalemate, communicating effectively with stakeholders and getting their input may also be required. Persuasion goes hand in hand with the other traits mentioned earlier to ensure issues are addressed appropriately.

Coming up next is a neatly consolidated article on the top 10 skills required to be a successful Software Testing professional, covering all the attributes we have seen thus far plus one extra skill.