Agile

The title of this post is probably not a good indicator of what's in this entry. I was intending to talk about Agile testing and then shifted gears to begin instead by trying to shed some light on a basic question: what is Agile? In subsequent posts, I hope to focus more on testing in an agile context.

What is Agile (in the context of software development)? Is it a buzzword? Is it what the dictionary defines "agile" to be: adaptable, able to move quickly, respond quickly? Well, if the dictionary definition were all there is to it, then I'd say almost everyone involved in producing software would want to be agile.

Agile refers to a collection of methodologies that enable agility in producing software; common agile methods include Scrum, XP and others. The Agile Manifesto describes the values of the agile community, listed below.

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

While agile methods do not set out to do away with the items on the right, they place greater emphasis on the items on the left, and that can make a significant difference in how software is produced vis-à-vis non-agile / traditional models of development (and testing).

Testing needs to be business driven

Testing needs to be business driven and customer focused. Testers may be reluctant to accept that their test efforts are not really driven by what is important to the business or the customer. It is fairly commonplace for quality criteria to be defined by the testing group alone, without consulting stakeholders or checking whether the criteria match what the business or customers think quality should be. The customer or the business must decide and define the scope of the term quality in the context of what is being delivered, and testing must focus on that definition.

Deciding what needs to be tested, prioritizing tests and managing testing risks all need inputs from the customer or their representatives. The testing group alone should not decide what to test, the scope and extent of testing, or the areas to be tested more or less. In the agile world, the ability of testing as a function to adapt to delivering what the business needs is critical for the group to add significant value. Test planning and development should not be limited to the testing team; it should involve other stakeholders, so the team understands what is important to the business as opposed to what the group thinks might be important.

NASSCOM Product Conclave 2009

I was away for a couple of days attending the NASSCOM Product Conclave 2009.

I liked the talk by Guy Kawasaki. There were a few other thought-provoking sessions and some interesting panelists and speakers.

Testing vs. field-observed defects

Myers put forth a counter-intuitive principle of software testing: the more defects found during formal testing, the more remain to be found later.

There seems to be a positive correlation between the rate of defects found during formal testing and the rate of defects reported from the field. A higher rate of defects during a formal testing exercise usually means either that there was a higher rate of error injection during development, or that a new and more effective approach to testing was followed. It could also be that a great deal of additional, extraordinary test effort was expended, resulting in more defects being found.

A popular analogy for the relationship between defect rates in formal testing and in the field is to picture the overall defect population as an iceberg. The visible tip corresponds to the defects found during testing, and the submerged portion to the latent field defects. The overall size of the iceberg is determined by the level of error injection during development. Formal testing normally happens once the code is developed and integrated, by which time the "iceberg" has already formed. The larger the visible tip, the larger the entire iceberg.

This does not mean we simply accept the latent defects that will be revealed during field usage. We can take steps to reduce the extent of the latent defects and bring more of the iceberg above water. It must be stressed that managing the quality of the development process is important and can contribute towards reducing the rate of error injection. Prevention is definitely better than trying to find and fix defects (probably introducing other defects in the process). Even with robust processes, some amount of error injection cannot be ruled out, and this is where practices such as good design and code reviews and inspections are needed. Additionally, unit and integration tests by developers prior to checking code into the repository should help reduce the number of defects that are left lurking around. The testing team must also continually enhance their tests, improve coverage and analyse defect rates and trends across releases to make sure that testing is doing its best to find as many issues as it can.

Defect & Effort

It is observed that, with other factors such as skill levels, processes, tools and technology held constant, there tends to be a linear relationship between defects and effort: human errors cause defects to be introduced at a roughly constant rate. That rate can, however, be altered by improving the development process, better training, changing schedules, improved staffing, using better tools and techniques, and so on. The same relationship can also be viewed as that between the defect arrival rate and the code development rate, which in turn is related to effort.
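
To make the linear model concrete, here is a tiny sketch with a made-up injection rate (the figure of 0.5 defects per person-day is invented purely for illustration; real rates vary with people, process and tools):

    # Hypothetical linear defect-effort model; the rate is made up.
    DEFECTS_PER_PERSON_DAY = 0.5

    def expected_defects(effort_person_days):
        # Defects grow in direct proportion to effort expended.
        return DEFECTS_PER_PERSON_DAY * effort_person_days

    print(expected_defects(100))  # 50.0
    print(expected_defects(200))  # 100.0 - double the effort, double the defects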


Privacy of test data

Privacy of the data used in testing is something organizations must consider. It is not uncommon to see organizations using a copy of their production data to facilitate testing of their applications. Doing so automatically exposes private data to internal constituents such as testers, database administrators, developers and others who have access to the data. Organizations tend to assume that since the test data and its environment reside within the organization's firewall, the data is safe; moreover, securing test environments is often not high on the priority list. The fact remains, however, that employees now have access to private data, including items such as credit card information, financial data and SSNs. Providing such access violates privacy regulations, enables data theft and misuse by internal staff, and even exposes the data to external hacking. Given the typically weaker security surrounding a test environment, all that hackers need to do is break into the corporate network and help themselves to the data mine resident in the test databases.

The rationale for using production data in testing is to test the application comprehensively against real-life data. While this may be true, organizations cannot ignore the risks involved in simply using a copy of production data as-is in test databases. A couple of techniques that can mitigate these risks are generating test data and masking sensitive data.

Generating test data eliminates the need to use copies of production data. Organizations may choose to use a mix of non-sensitive production data and generated data for sensitive fields such as card numbers. Test data generation is not as simple as it sounds, however: producing data that represents the various possible real-life use cases is not easy, and the more complex the application being tested, the more difficult it is to generate suitable test data.
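
As a minimal sketch of the mixed approach (the field names and record layout are hypothetical), non-sensitive production fields are carried over while sensitive fields are substituted with generated values:

    import random
    import string

    def generate_card_number():
        # Illustrative only: a random 16-digit string, not a valid
        # (Luhn-checked) card number.
        return "".join(random.choice(string.digits) for _ in range(16))

    def build_test_row(production_row):
        # Keep non-sensitive production fields; generate the sensitive ones.
        # All field names here are hypothetical.
        return {
            "city": production_row["city"],
            "plan": production_row["plan"],
            "name": f"Test User {random.randint(1, 10**6)}",
            "card_number": generate_card_number(),
        }

    sample = {"city": "Bangalore", "plan": "gold",
              "name": "A. Customer", "card_number": "4111111111111111"}
    print(build_test_row(sample))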

Masking production data is another technique for maintaining data privacy. Masking is also known as scrubbing or sanitizing data: sensitive values are transformed using various algorithms so that private data remains hidden from view. Several vendors offer data masking solutions. The advantage of masking is that testing can happen with realistic data; however, masking data for large and complex applications takes considerable effort and expense to implement.
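
A minimal sketch of the masking idea (the substitution rules below are deliberately simplistic; commercial masking tools use far more sophisticated, often format-preserving, algorithms):

    def mask_card_number(card_number):
        # Simplistic rule: hide all but the last four digits.
        return "X" * (len(card_number) - 4) + card_number[-4:]

    def mask_ssn(ssn):
        # Simplistic rule for an SSN-like "AAA-GG-SSSS" string.
        return "XXX-XX-" + ssn[-4:]

    print(mask_card_number("4111111111111111"))  # XXXXXXXXXXXX1111
    print(mask_ssn("123-45-6789"))               # XXX-XX-6789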

The pesticide paradox

An interesting analogy comparing software testing with the use of pesticides in farming was presented by Beizer in his book Software Testing Techniques. He called it the pesticide paradox.

Repetitive use of the same pesticide mix to eliminate insects will, over time, lead to the insects developing resistance, rendering the mix ineffective. A similar phenomenon can be seen when testing software: as testers repeat the same set of tests over and over again, the software appears to develop immunity to them, and fewer and fewer defects show up until the tests reveal nothing new at all.

Further, every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. To overcome the pesticide paradox, testers must regularly develop new tests that exercise the various parts of the system and their interconnections in order to find additional defects. Testers also cannot rely forever on existing techniques or methods; they must continually look to improve on them to make testing more effective.

Defects are useful

Defects provide real, observable data regarding a project's progress, in terms of both quality and schedule. I once came across an interesting analogy comparing defects to pain in the human body. Pain is the body's way of providing feedback, without which we could cause ourselves serious harm without even realizing it. Defects play the equivalent role in software development. While defects and pain are both things we wish to avoid and eliminate, their presence signals underlying symptoms that need a cure. Analysis of defects or pain by competent professionals leads to unearthing and fixing the issues, which in turn ensures the better health of the system.

Defects are real, observable manifestations. They indicate the progress of software development, the effectiveness of the development process, the potential for improvement and the quality of the product being developed. Defects can be counted, charted and predicted, and they provide a wealth of information and insight into the product as well as the development effort.

Wideband Delphi (WBD)

Wideband Delphi (WBD) is a structured estimation technique involving a group of experts. There is plenty of literature on the details of implementing the technique; in brief, it involves getting a group of "experts" to make estimates, discuss their assumptions and arrive at a consensus estimate. Estimates made by a group of experts with varied perspectives are expected to be better than those made by any single individual, who may not have the breadth or depth of understanding of all the activities involved.

In this technique, the team of experts begins by analysing the scope / specification of the work being estimated, brainstorming assumptions and creating a work-breakdown structure (WBS). Members of the team then individually estimate the items in the WBS, noting any further changes to the WBS and assumptions. The team then meets to arrive at a consensus on the estimates. The meeting is facilitated by a moderator, who charts the estimates without revealing who made them and guides the group through understanding the range of estimates, clarifying assumptions and revising estimates, in a cyclical process, until consensus is reached.
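
A toy sketch of the moderator's charting step (the estimates and the consensus threshold are invented; in practice convergence comes from discussion, not an algorithm):

    # Anonymous estimates (person-days) across three hypothetical WBD rounds.
    rounds = [
        [30, 45, 60, 90, 40],  # round 1
        [40, 45, 55, 60, 45],  # round 2, after assumptions were clarified
        [45, 48, 50, 52, 47],  # round 3, converging
    ]

    for i, estimates in enumerate(rounds, start=1):
        low, high = min(estimates), max(estimates)
        print(f"Round {i}: estimates range from {low} to {high}")
        if high - low <= 10:  # arbitrary threshold for the sketch
            print("  Spread is narrow; the group is close to consensus.")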

While implementing the WBD technique, it is important to assemble the right team to generate the estimates. It is a good idea to involve representatives from the different functions that have a stake in the product, so that together they can agree on the estimates and feel a sense of ownership of the plan. The technique is useful for new projects and for projects with many factors and much uncertainty. WBD helps refine and develop the WBS as well as clarify the assumptions behind the estimates. It does, however, take time and requires multiple experts to come together to make estimates.

Software Complexity

In the previous post we looked at a software complexity metric known as Cyclomatic Complexity. Beyond this measure, various other metrics are used to gauge complexity, including measures of size by way of lines of code or function points (in turn translated to lines of code), Halstead's complexity measures, Information Flow Complexity (IFC, IEEE 982.2) and metrics for measuring the complexity of object-oriented code.

The basic theory behind measuring complexity is that the greater the complexity of the code, the more difficult it is to test and maintain. Increased complexity leads to a higher probability of defects and greater difficulty in maintaining the code. Complexity metrics are used to predict defect proneness and maintenance productivity, and to help identify code that needs to be simplified, areas at greater risk of defects and areas where additional testing may be needed.

While these metrics focus on the complexity of the software's structure, we must remember that software complexity is not limited to structure or design. It also includes the complexity of the computation being performed (computational complexity, viewed from a computational standpoint rather than a human one) and the complexity of understanding the code (conceptual complexity, from the human programmer's standpoint).

Cyclomatic Complexity

One of the more popular complexity measures is McCabe's Cyclomatic Complexity (CC). The theory behind CC is simple: CC measures the number of control flows within a module, where a module is defined as a body of executable code with a single entrance and a single exit. Control flow determines the number of paths through the module, and the greater the number of paths, the greater the module's complexity.

The cyclomatic number for a module is equal to the number of linearly independent paths through the module, and can be used as the minimum number of distinct tests required to exercise each of those paths, and with them every executable statement, at least once.

CC may be calculated in two ways:

1. By counting the edges and nodes of the module's control-flow graph (nodes correspond to the corners, edges to the bodies of the arrows):
CC = # of edges - # of nodes + 2
2. By counting the number of binary decision points:
CC = # of binary decisions + 1
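
As a sketch, consider a hypothetical module with one if/else and one while loop; both formulas agree (the node and edge counts below come from drawing its control-flow graph):

    # A tiny hypothetical module: one if/else and one while loop.
    def process(items, flag):
        if flag:          # binary decision 1
            total = 10
        else:
            total = 0
        while items:      # binary decision 2
            total += items.pop()
        return total

    # Method 1: the control-flow graph of process() has 6 nodes and 7 edges.
    cc_graph = 7 - 6 + 2           # = 3
    # Method 2: two binary decisions (the if and the while).
    cc_decisions = 2 + 1           # = 3

    print(cc_graph, cc_decisions)  # 3 3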

After we calculate the CC number for a module, what do we do with it, and what does it mean? Stated simply, a higher CC signifies greater complexity of the module and corresponds to greater difficulty in testing and maintaining it. Rules of thumb have been put forth for interpreting CC numbers; one such rule holds that CC > 20 signifies a high degree of complexity and a risk of the code being defect-prone. There are also rules that use the CC number to predict the probability of introducing regressions or new defects while fixing another defect; here too, a higher CC corresponds to a greater probability of introducing new defects while making fixes. CC is helpful in gaining insight into how difficult code is to maintain and test.

The following are extensions of Cyclomatic Complexity.
  • CCD (Cyclomatic Complexity Density) is used to predict maintenance productivity and is derived by dividing CC by LOC (lines of code). A higher CCD corresponds to lower maintenance productivity.
  • ECC (Essential Cyclomatic Complexity) measures the cyclomatic complexity after the structured constructs (such as if, while, case, sequence) are removed.
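
For instance (all numbers made up), a module with a CC of 12 spread over 200 lines of code has a CCD of 0.06:

    cc = 12          # cyclomatic complexity of a hypothetical module
    loc = 200        # its size in lines of code
    ccd = cc / loc   # 0.06; a higher density suggests lower maintainability
    print(ccd)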

I've been away

I've been away from blogging and fairly busy over the past few weeks. I hope to be back soon with posts and updates on the very interesting subject of Software Quality & Testing.

Feel free to send any comments or feedback my way.